text stringlengths 454 608k | url stringlengths 17 896 | dump stringclasses 91 values | source stringclasses 1 value | word_count int64 101 114k | flesch_reading_ease float64 50 104 |
|---|---|---|---|---|---|
Opened 4 years ago
Closed 4 years ago
Last modified 4 years ago
#27001 closed Bug (fixed)
Regression in query counts using RadioSelect with ModelChoiceField
Description
Before 1.9, I had on my list of things to look at the fact that a standard ModelChoiceField with a RadioSelect takes two queries. Now looking at this in 1.10, the number of queries has ballooned (to 11 queries in my simple test case).
I've not got far in looking for a solution, but this test passes in 1.9 and fails in 1.10 and master with
AssertionError: 11 != 2 : 11 queries executed, 2 expected:
def test_radioselect_num_queries(self): class CategoriesForm(forms.Form): categories = forms.ModelChoiceField( queryset=Category.objects.all(), widget=forms.RadioSelect ) template = Template('{% for w in form.categories %}{{ w }}{% endfor %}') with self.assertNumQueries(2): template.render(Context({'form': CategoriesForm()}))
Change History (8)
comment:1 Changed 4 years ago by
comment:2 Changed 4 years ago by
comment:3 Changed 4 years ago by
Created a pull request with a fix.
ChoiceFieldRenderer doesn't have
__iter__ defined, so Python iterates over it by calling
__getitem__ with an increasing index until an exception is raised.
ChoiceFieldRenderer.__getitem__ calls
list on itself, which turns iteration into an n2 operation. When the choices are backed by a queryset as in ModelChoiceField, that means lots of redundant database queries.
Fixed by adding an
__iter__ method to
ChoiceFieldRenderer and changing
__getitem__ to use it, so that indexing still works.
comment:4 Changed 4 years ago by
comment:5 Changed 4 years ago by
comment:6 Changed 4 years ago by
As this is a regression in 1.10 I suppose this should be backported.
Branch with failing test here: | https://code.djangoproject.com/ticket/27001 | CC-MAIN-2020-34 | refinedweb | 284 | 57.87 |
Got the following error message from an application:
Exception occurred during event dispatching:
javax.swing.text.StateInvariantError: infinite loop in formatting
at javax.swing.text.FlowView$FlowStrategy.layout(FlowView.java:429)
at javax.swing.text.FlowView.layout(FlowView.java:182)
at javax.swing.text.BoxView.setSize(BoxView.java:265)
at javax.swing.text.BoxView.layout(BoxView.java:600)
at javax.swing.text.BoxView.setSize(BoxView.java:265)
at javax.swing.plaf.basic.BasicTextUI$RootView.paint(BasicTextUI.java:11
69)
at javax.swing.plaf.basic.BasicTextUI.paintSafely(BasicTextUI.java:523)
at javax.swing.plaf.basic.BasicTextUI.paint(BasicTextUI.java:657)
at javax.swing.plaf.basic.BasicTextUI.update(BasicTextUI.java:636)
at javax.swing.JComponent.paintComponent(JComponent.java:398)
at javax.swing.JComponent.paint(JComponent.java:739)
at javax.swing.JComponent.paintChildren(JComponent.java:523)
at javax.swing.JComponent.paint(JComponent.java:748)
at javax.swing.JComponent.paintChildren(JComponent.java:523)
at javax.swing.JComponent.paint(JComponent.java:748)
at javax.swing.JLayeredPane.paint(JLayeredPane.java:546)
at javax.swing.JComponent.paintChildren(JComponent.java:523)
at javax.swing.JComponent.paint(JComponent.java:719)
at java.awt.GraphicsCallback$PaintCallback.run(GraphicsCallback.java:23)
at sun.awt.SunGraphicsCallback.runOneComponent(SunGraphicsCallback.java:
54)
at sun.awt.SunGraphicsCallback.runComponents(SunGraphicsCallback.java:91
)
at java.awt.Container.paint(Container.java:960)
at sun.awt.RepaintArea.paint(RepaintArea.java:298)
at sun.awt.windows.WComponentPeer.handleEvent(WComponentPeer.java:193)
at java.awt.Component.dispatchEventImpl(Component.java:2665):10
3)
at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:93)
at java.awt.EventDispatchThread.run(EventDispatchThread.java:84)
Expected behavior: The text is shown
Actual behavior: Throws exception and text is never shown
Was it working before? If so, what was changed?
This is the first time that we tested it.
To Reproduce:
Run Bug with a value like 40000 - it happens *EVERY* time. We've tried
values down to 33000 and it still happens.
It's caused by bugs in javax.swing.text.GlyphView.java - which uses a short
to represent the offset and length into a Document. However if you set the
document with text that is 32767 or less characters the problem never occurs.
There is no workaround other than not setting text that long !!
Standalone Test Case:
import javax.swing.*;
public class Bug {
/** Run with a number indicating the number of characters in the string **/
public static void main(String aArgs[])
throws Throwable {
if (aArgs.length != 1) {
System.err.println("Usage: Bug [No of characters]");
System.exit(1);
}
int lLength = Integer.parseInt(aArgs[0]);
JFrame lFrame = new JFrame("TestMe");
JTextPane lText = new JTextPane();
lText.setEditable(false);
lText.setOpaque(false);
lFrame.getContentPane().add(lText);
lText.setText(getText(lLength));
lFrame.setSize(500, 500);
lFrame.setVisible(true);
}
private static String getText(int aLen) {
StringBuffer lBuff = new StringBuffer();
while (aLen > 6) {
lBuff.append("12 21 ");
aLen -= 6;
}
while (aLen > 0) {
lBuff.append("a");
aLen--;
}
return lBuff.toString();
}
}
Add some linefeeds to keep paragraphs under Math.MAX_SHORT in length or
provide your own GlyphView implementation from the ViewFactory.
This is intentional behavior. It is a huge memory savings
to use shorts instead of ints since so many of these get created.
The limitation is that the longest GlyphView can be is
Math.MAX_SHORT characters at an offset of Math.MAX_SHORT
from Element.getStartOffset(). We feel this is a reasonable
restriction. Element.getStartOffset is limited to Math.MAX_INT
so documents are not limited as claimed in the bug description.
The GlyphView restriction simply means a paragraph cannot exceed
Math.MAX_SHORT.
xxxxx@xxxxx 2001-03-13
Are you crazy? It is *intentional* that it throws
exceptions?
If a webpage is loaded with a large paragraph, it is
*acceptable* that the page should fail to load and throw an
exception?
I am trying to use a TextPane to view log files, and get
the same problem.. The file has about 260000 characters in
it, and I don't believe it has any lines in it anywhere
32767 characters. All lines have \n linefeeds, and I blow
up every time I load. This NEEDS to be resolved.
Well, thank you, pals!
This Error (note that it's not just an exception but an
error) almost terminated a project we are trying to sell!
Try displaying a 60K sized, concatenated BASE64-encoded
String with JEditorPane, and try to figure out why the
application goes haywire!
If you would at least catch this error internally and throw
a reasonable exception, I could see your point, but the way
you approach this bug right now is plain ridiculous!
Even Lotus Notes didn't expect this behaviour and won't let
you access mails with paragraphs this long... I too suggest
you ought to rethink this issue.
I have filed another bug with Sun, including outlining the
workaround described here that requires no more memory.
Hopefully they won't close it and ignore customer input when
a win/win solution obviously exists.
Your customers don't feel that this is a reasonable
restriction, much less a reasonable implementation. The cast
to short blindly and silently loses data. Code like this
should never be allowed in what is supposedly a
production-quality system.
This entire problem could be very easily resolved by simply
performing an architectural tweak:
1. Make GlyphView an abstract class.
2. Make two (or more) subclasses of GlyphView; one that
stores offset and length as short (if memory savings are
that important) and one that stores them as int (so you can
actually do what your public interface indicates you can do,
and not lose data.) Call them, say, GlyphViewShort and
GlyphViewInt.
3. Make createFragment(int p0, int p1) in GlyphView a
factory method that constructs the appropriate-sized
GlyphViewShort or GlyphViewInt depending on the size of p0
and p1.
4. Instead of explicitly calling the GlyphView constructor,
create and call a factory method that constructs the
appropriately-sized GlyphView.
5. Instead of accessing length and offset directly, access
them with a method int getLength() and int getOffset().
Thats it. It's that easy. It should take no more than 5
minutes to make all of your customers happy.
That's a lot better than telling your customers that their
code is unreasonable (even when you don't know what they
need to do,) and fail silently, or bomb out horribly if the
sign bits are wrong when you cast to short.
If you *really* want to save space for short paragraphs, you
can make a derived class GlyphViewByte that stores offset,
length, and that poorly-named variable "x" into bytes if
they're all small enough to fit.
I also noted in the other bug report that exceptions/errors
aren't thrown reliably--this can simply cause silent failure
too.
After I filed the bug on this, along with a solution on how
to fix it for real, Sun never responded. Good job, guys.
If anyone at Sun can justify how a blind, narrowing cast to
short is a good idea in production code, please let us know.
Again, I'll be happy to give you a much better solution
(that not only works in all cases, but saves *more* memory
than your bad bugfix here) as I outlined in the bug report I
filed.
To be fair, Sun did respond--almost seven months after I
filed the bug report. They indicate that this bug was fixed
in JDK 1.5.0 beta, and it seems to be. I'm doing Sun's job
for them and indicating that here, as they haven't. | http://bugs.sun.com/bugdatabase/view_bug.do%3Fbug_id=4425177 | crawl-002 | refinedweb | 1,247 | 52.46 |
Plot MultiAxes tutorial/ru
Please complete the basic tutorial before starting with this tutorial. In this tutorial we will learn how to create and edit a multiaxes plot. You can learn more about the Plot Workbench here.
Multiaxes plot example
In the image you can see the result that we will approximately obtain. Following this tutorial you will learn:
- How to create a multiaxes Plot from the Python console.
- How to edit axes properties.
- How to control the grid and the legend when several axes sets are present.
- How to edit the position of labels, titles and legends.
Plotting data
As we did in the previous tutorial we will use the Python console or macros to plot the data, but in this case we will plot the data using two axes sets.
Creating plot data
In this example we will plot 3 functions, the two used in the previous tutorial, and a new polynomial one. The range of the polynomial function is different from the other functions therefore new axes are required. The next commands will create the data arrays for us:
import math p = range(0,1001) x = [2.0*xx/1000.0 for xx in p] y = [xx**2.0 for xx in x] t = [tt/1000.0 for tt in p] s = [math.sin(math.pi*2.0*tt) for tt in t] c = [math.cos(math.pi*2.0*tt) for tt in t]
As x moves from 0 to 2, the y function has a maximum value of 4, so if we try to plot this function with the trigonometrical ones, at least one function will be truncated or badly scaled, therefore we we need a multiaxes plot. A multiaxes plot in FreeCAD is intended to get a plot with multiple axes, not to get multiple plots in the same document.
Drawing functions, adding new axes
We will plot the trigonometrical functions using the main axes. If all your axes have the same size it is not relevant which function is plotted first. But if this is not the case the function that uses the biggest axes, in our case the polynomial function, should be plotted last. The legend will be attached to the last axes system and it is more convenient if this is the biggest. To plot the trigonometrical functions we only need to launch some commands.
try: from FreeCAD.Plot import Plot except ImportError: from freecad.plot import Plot Plot.plot(t,s,r"$\sin\left( 2 \pi t \right)$") Plot.plot(t,c,r"$\cos\left( 2 \pi t \right)$")
In this example we pass the series labels for the legend directly. Note that the label strings have the r prefix in order to prevent Python from trying to interpret special characters (the \ symbol is used frequently in LaTeX syntax).
Before we can plot the polynomial function, we need to create new axes. In the Plot Workbench new axes are automatically selected as the active ones, and new plots will be associated with these axes.
Plot.addNewAxes() Plot.plot(x,y,r"$x^2$")
As you can see your plot has gone crazy, with axes ticks overlapping, curves of the same color, etc. Now we need to use the Plot Workbench to fix this graph.
Configuring plot
Configuring axes
The Plot Workbench provides a tool to modify the properties of axes.
Axes configuration tool icon
With the axes tool you can add or remove axes, and set the active axes, which are then used if you plot more data.
To change the size of the first axes set, associated with the trigonometrical functions, it has to be activated first by changing the active axes from 1 to 0. We can then move the horizontal and vertical dimension sliders to reduce its size (try to emulate the example). We also need to change the alignment of the axes: select top and right respectively.
Configuring series
Set the series properties as we did in the previous tutorial.
Showing grid and legend
The grid and legend can be shown, and hidden, with the tools already described in the previous tutorial, but in this case the behavior is a little different because there are two axes sets.
Grid lines are added to the active axes set. To add lines to the second axes set in our example, it has to be activated first by changing the active axes from 0 to 1 in the axes tool.
As already mentioned the legend will be positioned relative last axes set. If you show the legend now you will see that it is really badly placed, but we will fix that later.
Setting axes labels
When it comes to setting the axes labels we again have to deal with our two axes sets. But since labels are usually set for all axes, the procedure is the same as described in the previous tutorial. The Plot Workbench allows you to set a title per axes set. In this case we only want to set a title for the last, the biggest, axes set.
Axes 0:
- X Label = $t$
- Y Label = $\mathrm{f} \left( t \right)$
Axes 1:
- Title = Multiaxes example
- X Label = $x$
- Y Label = $\mathrm{f} \left( x \right)$
Change the font size of all labels to 20, and the font size of the title to 24. Again there is an element, the title, that is badly placed.
Setting elements position
The Plot Workbench provides a tool to change the position of several plot elements, such as as titles, labels and legends.
Position editor icon
When you run the tool you will see a list of all editable elements. Titles and legends can be moved in both directions, but axis labels can only be moved along the axis they belong to. Select the title of axes 1 and move it to (0.24,1.01), then select the legend and move it to a better position. You can increase the font size of the legend labels as well.
Saving plot
Now you can save your work. See the previous tutorial if you don't remember how to do it.
- | https://wiki.freecadweb.org/Plot_MultiAxes_tutorial/ru | CC-MAIN-2022-05 | refinedweb | 1,021 | 62.98 |
When an array is created, each element of the array is set to default initial value according to its type. However, it is also possible to provide value other than the default values to each element of the array.
This is made possible using array initialization. Arrays can be initialized by providing values using a comma separated list within curly braces following their declaration. The syntax for array initialization is
datatype[] arrayName = {value1, value2,………………. };
For example : The statement
int[] num = {5,15,25,30,50};
creates a five element array with index values 0,1,2,3,4. The element num [0] is initialized to 5, num [1] is initialized to 15 and so on.
Note that we do not use the new keyword and do not specify the size of the array also. This is because when the compiler encounters such a statement it counts the number of element in the list to determine the size of the array and performs the task performed by the new operator. Initializing array is useful if you know the exact number and values of elements in an array at compile time.
The above initialization statement is equivalent to
int[] num = new int[5];
num [0] = 5 ;
num [1] = 15;
num [2] = 25;
num [3] = 30;
num [4] = 40;
Similarly, you can initialize an array by providing objects in the list. For example
Rectangle[] rect = {new Rectangle(2,3),new Rectangle()};
It will create rect array of Rectangle objects containing 2 elements.
There is a variation of array initialization in which we use the new keyword explicitly but omit the array length as it is determined from the initializer list. For example:
int[] num = new int[]{5,15,25,30,50};
This form of initialization is useful if you need to declare an array in one location but populate it in another while still taking the advantage of array initialization. For example:
displayData(new String[]{"apple","orange","litchi"});
An unnamed array created in such a way is called anonymous arrays.
public class InitializingArray
{
public static void main(String[] args)
{
int[] num = new int[]{5,15,25,30,500};
for(int i=0;i<num.length;i++)
System.out.println("num ["+i+"] : " +num[i]);
display(new String[] {"apple","orange","1itchi"});
}
static void display(String[] str)
{
for(int i=0;i<str.length;i++)
System.out.println("str ["+i+"] :" | http://ecomputernotes.com/java/array/initializing-array-in-java | CC-MAIN-2019-39 | refinedweb | 394 | 50.97 |
!- Search Loader --> <!- /Search Loader -->
Hi there,
I am having an issue with FORTRAN/C interop using Intel FORTRAN that I'm hoping someone can help me with.
It is illustrated by the contrived example below:
library.f90:
module m implicit none abstract interface subroutine callback(data_ptr) bind (c) use iso_c_binding type(c_ptr), intent(in), value :: data_ptr end subroutine callback end interface end module subroutine library_fn(c_fn_ptr, data_ptr) bind(c) ! Is this still the recommended way to indicate the exported symbols even with FORTRAN 2003 support? !DEC$ ATTRIBUTES DLLEXPORT :: library_fn use iso_c_binding use m implicit none type(c_funptr), intent (in), value :: c_fn_ptr type(c_ptr), intent (in), value :: data_ptr integer(kind=C_INTPTR_T) :: address procedure(callback), pointer :: f_fn_ptr address = transfer(data_ptr, address) write(*,'("library::library_fn::address = ", "0x",Z12)') address call c_f_procpointer(c_fn_ptr, f_fn_ptr) call f_fn_ptr(data_ptr) end subroutine library_fn
main.cxx:
#include <iostream> using namespace std; // Implemented by FORTRAN dll extern "C" void library_fn(void (*fn_ptr)(void*), void* data_ptr); void callback(void* data_ptr) { cout << "main::callback::data_ptr = " << data_ptr << endl; int i = *static_cast<int*>(data_ptr); cout << "main::callback::i = " << i << endl; } int main() { int i = 792; cout << "main::main::i = " << i << endl; void* data_ptr = static_cast<void*>(&i); cout << "main::main::data_ptr = " << data_ptr << endl; library_fn(&callback, data_ptr); }
Building on Win64:
Intel(R) Visual Fortran Intel(R) 64 Compiler XE for applications running on Intel(R) 64, Version 12.0.3.175 Build 20110309
ifort /c library.f90 lib /out:library.lib library.obj cl main.cxx library.lib
Building on Linux:
GNU Fortran (GCC) 4.9.2 20150212 (Red Hat 4.9.2-6)
gfortran -shared -fPIC library.f90 -o library.so g++ main.cxx library.so
Sample run on Win64:
[thwill@PSEUK1149(master)]$ ./main.exe main::main::i = 792 main::main::data_ptr = 00000000002CFE70 library::library_fn::address = 0x 2CFE70 main::callback::data_ptr = 00000000002CFE58 <---- INCORRECT! main::callback::i = 2948720
Sample run on 64-bit Linux:
[thwill@pseuk1149-centos7-vm(master)]$ ./a.out main::main::i = 792 main::main::data_ptr = 0x7ffc97963a54 library::library_fn::address = 0x7FFC97963A54 main::callback::data_ptr = 0x7ffc97963a54 <---- CORRECT! main::callback::i = 792
As you can see the data pointer has become "corrupted" on Win64 when it is passed back from the FORTRAN into the C.
FORTRAN is not something I use very often and until a fortnight ago I restricted myself to FORTRAN-77, so I may be making a stupid mistake, but given it works with gfortran perhaps not?
Any ideas?
Regards,
Tom Williams
I do not see the error with the 15.0 and later compilers.
Thomas W. wrote:
.. FORTRAN is not something I use very often and until a fortnight ago I restricted myself to FORTRAN-77, so I may be making a stupid mistake, but given it works with gfortran perhaps not?
Note it's "Fortran" now!
As mentioned by mecej4, more recent incarnations of the compiler including the current release, compiler 17 update 1, work as expected.
By the way, with respect to name-mangling that can take across the compilers and platforms with mixed language applications, you may want to look into the optional NAME= specifier with BIND(C attribute:
[fortran]
subroutine library_fn(c_fn_ptr, data_ptr) bind(c) ! Is this still the recommended way to indicate the exported symbols even with FORTRAN 2003 support? !DEC$ ATTRIBUTES DLLEXPORT :: library_fn [/fortran]
Note this has mostly to do with OS, not the language and on Windows, you may want to keep the option of DEF files in mind:
Hi folks,
Thanks for the replies. I have tried out the latest Intel Fortran compiler and can confirm that it seems to fix the issue, so I guess this was a compiler bug?
Regards,
Tom
Good to hear Tom. Yes, this is an apparent defect in the older 12.0 compiler fixed in a later release. Nothing in the example code shown can be attributed to the incorrect pointer value.
Many thanks to everyone who contributed here.
Just a final note that it actually works with Intel Fortran 12 update 9; the failing version was update 3.
Regards,
Tom | https://community.intel.com/t5/Intel-Fortran-Compiler/Problem-passing-pointer-from-C-to-Intel-FORTRAN-and-back-to-C/td-p/1080614 | CC-MAIN-2020-50 | refinedweb | 664 | 55.03 |
papa 0.9.3
Simple socket and process kernel
Summary
papa is a process kernel. It contains both a client library and a server component for creating sockets and launching processes from a stable parent process.
Dependencies
Papa has no external dependencies, and it never will! It has been tested under the following Python versions:
- 2.6
- 2.7
- 3.2
- 3.3
- 3.4
Installation
$> pip install papa
Purpose
Sometimes you want to be able to start a process and have it survive on its own, but you still want to be able to capture the output. You could daemonize it and pipe the output to files, but that is a pain and lacks flexibility when it comes to handling the output.
Process managers such as circus and supervisor are very good for starting and stopping processes, and for ensuring that they are automatically restarted when they die. However, if you need to restart the process manager, all of their managed processes must be brought down as well. In this day of zero downtime, that is no longer okay.
Papa is a process kernel. It has extremely limited functionality and it has zero external dependencies. If I’ve done my job right, you should never need to upgrade the papa package. There will probably be a few bug fixes before it is really “done”, but the design goal was to create something that did NOT do everything, but only did the bare minimum required. The big process managers can add the remaining features.
Papa has 3 types of things it manages:
- Sockets
- Values
- Processes
Here is what papa does:
- Create sockets and close sockets
- Set, get and clear named values
- Start processes and capture their stdout/stderr
- Allow you to retrieve the stdout/stderr of the processes started by papa
- Pass socket file descriptors and port numbers to processes as they start
Here is what it does NOT do:
- Stop processes
- Send signals to processes
- Restart processes
- Communicate with processes in any way other than to capture their output
Sockets
By managing sockets, papa can manage interprocess communication. Just create a socket in papa and then pass the file descriptor to your process to use it. See the Circus docs for a very good description of why this is so useful.
Papa can create Unix, INET and INET6 sockets. By default it will create an INET TCP socket on an OS-assigned port.
You can pass either the file descriptor (fileno) or the port of a socket to a process by including a pattern like this in the process arguments:
- $(socket.my_awesome_socket_name.fileno)
- $(socket.my_awesome_socket_name.port)
Values
Papa has a very simple name/value pair storage. This works much like environment variables. The values must be text, so if you want to store a complex structure, you will need to encode and decode with something like the JSON module.
The primary purpose of this facility is to store state information for your process that will survive between restarts. For instance, a process manager can store the current state that all of its managed processes are supposed to be in. Then if the process manager is restarted, it can restore its internal state, then go about checking to see if anything on the machine has changed. Are all processes that should be running actually running?
Processes
Processes can be started with or without output management. You can specify a maximum size for output to be cached. Each started process has a management thread in the Papa kernel watching its state and capturing output if necessary.
A Note on Naming (Namespacing)
Sockets, values and processes all have unique names. A name can only represent one item per class. So you could have an “aack” socket, an “aack” value and an “aack” process, but you cannot have two “aack” processes.
All of the monitoring commands support a final asterix as a wildcard. So you can get a list of sockets whose names match “uwsgi*” and you would get any socket that starts with “uwsgi”.
One good naming scheme is to prefix all names with the name of your own application. So, for instance, the Circus process manager can prefix all names with “circus.” and the Supervisor process manager can prefix all names with “supervisor.”. If you write your own simple process manager, just prefix it with “tweeter.” or “facebooklet.” or whatever your project is called.
If you need to have multiple copies of something, put a number after a dot for each of those as well. For instance, if you are starting 3 waitress instances in circus, call them circus.waitress.0, circus.waitress.1, and circus.waitress.2. That way you can query for all processes named circus.* to see all processes managed by circus, or query for circus.waitress.* to see all waitress processes managed by circus.
Starting the kernel
There are two ways to start the kernel. You can run it as a process, or you can just try to access it from the client library and allow it to autostart. The client library uses a lock to ensure that multiple threads do not start the server at the same time but there is currently no protection against multiple processes doing so.
By default, the papa kernel process will communicate over port 20202. You can change this by specifying a different port number or a path. By specifying a path, a Unix socket will be used instead.
If you are going to be creating papa client instances in many places in your code, you may want to just call papa.set_default_port or papa.set_default_path once when your application is starting and then just instantiate the Papa object with no parameters.
Telnet interface
Papa has been designed so that you can communicate with the process kernel entirely without code. Just start the Papa server, then do this:
telnet localhost 20202
You should get a welcome message and a prompt. Type “help” to get help. Type “help process” to get help on the process command.
The most useful commands from a monitoring standpoint are:
- sockets
- processes
- values
All of these can by used with no arguments, or can be followed by a list of names, including wildcards. For instance, to see all of the values in the circus and supervisor namespaces, do this:
values circus.* supervisor.*
Creating a Connection
You can create either long-lived or short-lived connections to the Papa kernel. If you want to have a long-lived connection, just create a Papa object to connect and close it when done, like this:
class MyObject(object): def __init__(self): self.papa = Papa() def start_stuff(self): self.papa.make_socket('uwsgi') self.papa.make_process('uwsgi', 'env/bin/uwsgi', args=('--ini', 'uwsgi.ini', '--socket', 'fd://$(socket.uwsgi.fileno)'), working_dir='/Users/aackbar/awesome', env=os.environ) self.papa.make_process('http_receiver', sys.executable, args=('http.py', '$(socket.uwsgi.port)'), working_dir='/Users/aackbar/awesome', env=os.environ) def close(self): self.papa.close()
If you want to just fire off a few commands and leave, it is better to use the with mechanism like this:
from papa import Papa with Papa() as p: print(p.sockets()) print(p.make_socket('uwsgi', port=8080)) print(p.sockets()) print(p.make_process('uwsgi', 'env/bin/uwsgi', args=('--ini', 'uwsgi.ini', '--socket', 'fd://$(socket.uwsgi.fileno)'), working_dir='/Users/aackbar/awesome', env=os.environ)) print(p.make_process('http_receiver', sys.executable, args=('http.py', '$(socket.uwsgi.port)'), working_dir='/Users/aackbar/awesome', env=os.environ)) print(p.processes())
This will make a new connection, do a bunch of work, then close the connection.
Socket Commands
There are 3 socket commands.
p.sockets(*args)
The sockets command takes a list of socket names to get info about. All of these are valid:
- p.sockets()
- p.sockets('circus.*')
- p.sockets('circus.uwsgi', 'circus.nginx.*', 'circus.logger')
A dict is returned with socket names as keys and socket details as values.
p.make_socket(name, host=None, port=None, family=None, socket_type=None, backlog=None, path=None, umask=None, interface=None, reuseport=None)
All parameters are optional except for the name. To create a standard TCP socket on port 8080, you can do this:
p.make_socket('circus.uwsgi', port=8080)
To make a Unix socket, do this:
p.make_socket('circus.uwsgi', path='/tmp/uwsgi.sock')
A path for a Unix socket must be an absolute path or make_socket will raise a papa.Error exception.
You can also leave out the path and port to create a standard TCP socket with an OS-assigned port. This is really handy when you do not care what port is used.
If you call make_socket with the name of a socket that already exists, papa will return the original socket if all parameters match, or raise a papa.Error exception if some parameters differ.
See the make_sockets method of the Papa object for other parameters.
p.close_socket(*args)
The close_socket command also takes a list of socket names. All of these are valid:
- p.close_socket('circus.*')
- p.close_socket('circus.uwsgi', 'circus.nginx.*', 'circus.logger')
Closing a socket will prevent any future processes from using it, but any processes that were already started using the file descriptor of the socket will continue to use the copy they inherited.
Value Commands
There are 4 value commands.
p.values(*args)
The values command takes a list of values to retrieve. All of these are valid:
- p.values()
- p.values('circus.*')
- p.values('circus.uwsgi', 'circus.nginx.*', 'circus.logger')
A dict will be returned with all matching names and values.
p.set(name, value=None)
To set a value, do this:
p.set('circus.uswgi', value)
You can clear a single value by setting it to None.
p.get(name)
To retrieve a value, do this:
value = p.get('circus.uwsgi')
If no value is stored by that name, None will be returned.
p.clear(*args)
To clear a value or values, do something like this:
- p.clear('circus.*')
- p.clear('circus.uwsgi', 'circus.nginx.*', 'circus.logger')
You cannot clear all variables so passing no names or passing * will raise a papa.Error exception.
Process Commands
There are 4 process commands:
p.processes(*args)
The processes command takes a list of process names to get info about. All of these are valid:
- p.processes()
- p.processes('circus.*')
- p.processes('circus.uwsgi', 'circus.nginx.*', 'circus.logger')
A dict is returned with process names as keys and process details as values.
p.make_process(name, executable, args=None, env=None, working_dir=None, uid=None, gid=None, rlimits=None, stdout=None, stderr=None, bufsize=None, watch_immediately=None)
Every process must have a unique name and an executable. All other parameters are optional. The make_process method returns a dict that contains the pid of the process.
The args parameter should be a tuple of command-line arguments. If you have only one argument, papa conveniently supports passing that as a string.
You will probably want to pass working_dir. If you do not, the working directory will be that of the papa kernel process.
By default, stdout and stderr are captured so that you can retrieve them with the watch command. By default, the bufsize for the output is 1MB.
Valid values for stdout and stderr are papa.DEVNULL and papa.PIPE (the default). You can also pass papa.STDOUT to stderr to merge the streams.
If you pass bufsize=0, not output will be recorded. Otherwise, bufsize can be the number of bytes, or a number followed by ‘k’, ‘m’ or ‘g’. If you want a 2 MB buffer, you can pass bufsize='2m', for instance. If you do not retrieve the output quicky enough and the buffer overflows, older data is removed to make room.
If you specify uid, it can be either the numeric id of the user or the username string. Likewise, gid can be either the numeric group id or the group name string.
If you want to specify rlimits, pass a dict with rlimit names and numeric values. Valid rlimit names can be found in the resources module. Leave off the RLIMIT_ prefix. On my system, valid names are as, core, cpu, data, fsize, memlock, nofile, nproc, rss, and stack.
rlimit={'cpu': 2, 'nofile': 1024}
The env parameter also takes a dict with names and values. A useful trick is to do env=os.environ to copy your environment to the new process.
If you want to run a Python application and you wish to use the same Python executable as your client application, a useful trick is to pass sys.executable as the executable and the path to the Python script as the first element of your args tuple. If you have no other args, just pass the path as a string to args.
p.make_process('write3', sys.executable, args='executables/write_three_lines.py', working_dir=here, uid=os.environ['LOGNAME'], env=os.environ)
The final argument that needs mention is watch_immediately. If you pass True for this, papa will make the process and return a Watcher. This is effectively the same as doing p.make_process(name, ...) followed immediately by p.watch(name), but it has one fewer round-trip communication with the kernel. If all you want to do is launch an application and monitor its output, this is a good way to go.
p.close_output_channels(*args)
If you do not care about retrieving the output or the exit code for a process, you can use close_output_channels to tell the papa kernel to close the output buffers and automatically remove the process from the process list when it exits.
- p.close_output_channels('circus.logger')
- p.close_output_channels('circus.uwsgi', 'circus.nginx.*', 'circus.logger')
p.watch(*args)
The watch command returns a Watcher object for the specified process or processes. That object uses a separate socket to retrieve the output of the processes it is watching.
Optimization Note: Actually, it hijacks the socket of your Papa object. If you issue any other commands to the Papa object that require a connection to the kernel, the Papa object will silently create a new socket and connect up for the additional commands. If you close the Watcher and the Papa object has not already created a new connection, the socket will be returned to the Papa object. So if you launch an application, use watch to grab all of its output until it closes, then use the set command to update your saved status, all of that can occur with a single connection.
The Watcher object
When you use watch or when you do make_process with watch_immediately=True, you get back a Watcher object.
You can use watchers manually or with a context manager. Here is an example without a context manager:
class MyLogger(object): def __init__(self, watcher): self.watcher = watcher def save_stuff(self): if self.watcher and self.watcher.ready: out, err, closed = self.watcher.read() ... save it ... self.watcher.acknowledge() # remove it from the buffer def close(self): self.watcher.close()
If you are running your logger in a separate thread anyway, you might want to just use a context manager, like this:
with p.watch('aack') as watcher: while watcher: out, err, closed = watcher.read() # block until something arrives ... save it ... watcher.acknowledge() # remove it from the buffer
The Watcher object has a fileno method, so it can be used with select.select, like this:
watchers = [] watchers.append(p.watch('circus.uwsgi')) watchers.append(p.watch('circus.nginx')) watchers.append(p.watch('circus.mongos.*')) while watchers: ready_watchers = select.select(watchers, [], [])[0] # wait for one of these for watcher in ready_watchers: # iterate through all that are ready out, err, closed = watcher.read() ... save it ... watcher.acknowledge() if not watcher: # if it is done, remove this watcher from the list watcher.close() del watchers[watcher]
Of course, in the above example it would have been even more efficient to just use a single watcher, like this:
with p.watch('circus.uwsgi', 'circus.nginx', 'circus.mongos.*') as watcher: while watcher: out, err, closed = watcher.read() ... save it ... # watcher.acknowledge() - no need since watcher.read will do it for us
w.ready
This property is True if the Watcher has data available to read on the socket.
w.read()
Read will grab all waiting output from the Watcher and return a tuple of (out, err, closed). Each of these is an array of papa.ProcessOutput objects. An output object is actually a namedtuple with 3 values: name, timestamp, and data.
The name element is the name of the process. The timestamp is a float of when the data was captured by the papa kernel. The data is a binary string if found in either the out or err array. It is the exit code if found in the closed array. Using all of these elements, you can write proper timestamps into your logs, even if data was captured by the papa kernel minutes, hours or days earlier.
The read method will block if no data is ready to read. If you do not want to block, use either the ready property or a mechanism such as select.select before calling read.
w.acknowledge()
Just because your have read output from a process, the papa kernel cannot know that you successfully logged it. Maybe you crashed or were shutdown before you had the chance. So the papa kernel will hold onto the data until you acknowledge receipt. This can be done either by calling acknowledge, or by doing a subsequent read or a close.
w.close()
When you are done with a Watcher, be sure to close it. That will release the socket and potentially even return the socket back to the original Papa object. It will also send off a final acknowledge if necessary.
If you use a context manager, the close happens automatically.
if watcher:
A boolean check on the Watcher object will return True if it is still active and False if it has received and acknowledged a close message from all processes it is monitoring.
WARNING: There should be only one
You will get very screwy results if you have multiple watchers for the same process. Each will get the available data, then acknowledge receipt at some point, removing that data from the queue. Based on timing, both will get overlapping results, but neither is likely to get everything.
- Author: Scott Maxwell
- License: MIT
- Categories
- Development Status :: 5 - Production/Stable
- Environment :: Console
- Intended Audience :: Developers
- License :: OSI Approved :: MIT License
- Operating System :: MacOS :: MacOS X
- Operating System :: POSIX :: BSD :: FreeBSD
- Operating System :: POSIX :: Linux
- Programming Language :: Python
- Programming Language :: Python :: 2.6
- Programming Language :: Python :: 2.7
- Programming Language :: Python :: 3
- Programming Language :: Python :: 3.3
- Programming Language :: Python :: 3.4
- Topic :: Software Development
- Package Index Owner: codecobblers
- DOAP record: papa-0.9.3.xml | https://pypi.python.org/pypi/papa/0.9.3 | CC-MAIN-2017-22 | refinedweb | 3,119 | 57.06 |
Last week was all about documentation and creating a skeleton of our report website as well as improving the UI. This included getting ready the JSDoc style in-code documentation and docs specifying the limits of our system.
<aside> 🎯 **What are we looking to achieve?
2. Publishing NPM Packages Now that we have this Unified Bot Format where we are able to take a Voiceflow file, convert it to this standard format and use our own format to start uploading chatbots to Twilio, we thought it would be best to separate the UI from this logic and turn them into packages developers can use.
3. Working on the Report Website. We only took on trying to get a skeleton of the project website up and running during this week. This will house all our research, how we implemented everything and all relevant documentation.
</aside>
The UI has taken a major part of our time, mostly in order to ensure adequate HCI design. It wasn’t a testing priority for our team. Nonetheless, we have produced thorough unit tests going through and testing each individual page of the UI.
To do these tests we used an open-source JS testing framework called
Jest which acts as an assertion library and gives us the ability to mock certain parts of the UI. The great thing about
Jest is that tests are isolated and can be run in parallel which enables us to create a huge test suite that can be run in seconds.
To modularise our system, we have a singular package that is responsible for taking a Voiceflow file and converting it to our Unified Bot Format.
import {voiceflowToBotFormat} from 'vf-to-ubf'; var fs = require('fs'); voiceflow_diagram = fs.readFileSync("./VoiceflowFile.vf") universal_format = voiceflowToBotFormat(voiceflow_diagram)
All you have to do is pass in your Voiceflow file to the package and the UBF equivalent is returned. This allows you to:
ubf-to-twilio. | https://www.notion.so/dc7243880f4b4807865923462268281e | CC-MAIN-2022-27 | refinedweb | 321 | 60.35 |
[Solved] dll import problem
Hi.
I'm trying to add a library to my project.
In .pro I ve already add the library:
@LIBS += -L$$PWD\libs\Vicon\ -lViconDataStreamSDK_CPP
INCLUDEPATH += $$PWD\libs\Vicon\include@
In .cpp already add the header file:
@#include "Client.h"@
This header has a lot of classes inside of a namespace which i called:
@using namespace ViconDataStreamSDK::CPP;@
Everything seems ok, I call the classes, but when debugging an error occur with dll import:
engine.obj:-1: error: LNK2019: unresolved external symbol "__declspec(dllimport) public: class ViconDataStreamSDK::CPP::Output_Connect __thiscall ViconDataStreamSDK::CPP::Client::Connect(class ViconDataStreamSDK::CPP::String const &)" (_imp?Connect@Client@CPP@ViconDataStreamSDK@@QAE?AVOutput_Connect@23@ABVString@23@@Z) referenced in function "public: void __thiscall Engine::ConnectVicon(void)" (?ConnectVicon@Engine@@QAEXXZ)
engine.obj:-1: error: LNK2019: unresolved external symbol "__declspec(dllimport) public: __thiscall ViconDataStreamSDK::CPP::Client::Client(void)" (_imp??0Client@CPP@ViconDataStreamSDK@@QAE@XZ) referenced in function "public: void __thiscall Engine::ConnectVicon(void)" (?ConnectVicon@Engine@@QAEXXZ)
It seems is connected with this part of my header file Client.h:
@#ifdef _EXPORTING
#define CLASS_DECLSPEC __declspec(dllexport)
#else
#define CLASS_DECLSPEC __declspec(dllimport)
#endif // _EXPORTING@
And I have .dll and the .lib files inside the correct folders.....
Any ideia whats going on?
Thank you.
Are you sure the library is properly named? ViconDataStreamSDK_CPP.lib?
Yes.... can deceive because the CPP but both of them are in the folder of debugging and in the folder of the .pro path
ViconDataStreamSDK_CPP.dll
and
ViconDataStreamSDK_CPP -> Object File Library
Could it be because they were made with Visual Studio C++ and Im using Qt IDE? Any conflict?
Are you sure this lib and dll were built with Visual Studio? Not MinGW/Cygwin and therefore GCC as a compiler?
Not sure, but in the Software Development Kit they just say this:
Windows – C++
Your application should
#include “Client.h”
Link against “ViconDataStreamSDK_CPP.lib”
Redistribute:
“ViconDataStreamSDK_CPP.dll”
“Microsoft.VC8.CRT” (x86) or “Microsoft.VC9.CRT” (x64).
And I've put both the dlls in the correct folders.
Well then you have a "mangling": problem: your DLL was built against an older version of Visual Studio which exported the symbols for C++ in a different way.
You will only be able to link against that DLL with code compiled using:
Visual Studio 2005 aka VC80 for the 32 bits version
Visual Studio 2008 aka VC90 for the 64 bits version
If you use a newer version of Visual Studio, then you will have to get newer versions of that library.
Enjoy the joys of programming on the Windows platform :D
Problem was solved.
With new dll, the problem was solved.
Thank you all
Have the same problem, sent you a pm JLamas. | https://forum.qt.io/topic/20632/solved-dll-import-problem | CC-MAIN-2018-39 | refinedweb | 447 | 50.02 |
#include <wx/txtstrm.h>
This class provides functions that reads text data using an input stream, allowing you to read text, floats, and integers.
The wxTextInputStream correctly reads text files (or streams) in DOS, Macintosh and Unix formats and reports a single newline char as a line ending.
wxTextInputStream: int on 32-bit architectures) so that you cannot use long. To avoid problems (here and elsewhere), make use of wxInt32, wxUint32 and similar types.
If you're scanning through a file using wxTextInputStream, you should check for
EOF before reading the next item (word / number), because otherwise the last item may get lost. You should however be prepared to receive an empty item (empty string / zero number) at the end of file, especially on Windows systems. This is unavoidable because most (but not all) files end with whitespace (i.e. usually a newline).
For example:
Constructs a text stream associated to the given input stream.
Destructor.
Reads a character, returns 0 if there are no more characters in the stream.
Returns a pointer to the underlying input stream object.
Reads a single unsigned byte from the stream, given in base base.
The value of base must be comprised between 2 and 36, inclusive, or be a special value 0 which means that the usual rules of C numbers are applied: if the number starts with
0x it is considered to be in base 16, if it starts with 0 - in base 8 and in base 10 otherwise. Note that you may not want to specify the base 0 if you are parsing the numbers which may have leading zeroes as they can yield unexpected (to the user not familiar with C) results.
Reads a double (IEEE encoded) from the stream.
Reads a line from the input stream and returns it (without the end of line character).
Same as ReadLine().
Reads a word (a sequence of characters until the next separator) from the input stream.
Sets the characters which are used to define the word boundaries in ReadWord().
The default separators are the
space and
TAB characters. | http://docs.wxwidgets.org/3.0/classwx_text_input_stream.html | CC-MAIN-2018-34 | refinedweb | 346 | 71.04 |
I am familiar with how to check if a string contains a substring, and also familiar with how to check if a single letter is a number or a letter, but how would I go about checking a string for any letters?
def letters?(string)
# what do i do here?
end
# string could be anything from '111' to '1A2' to 'AB2589A5' etc...
string = '1A2C35'
if letters?(string) == true
# do something if string has letters
else
# do something else if it doesnt
end
I think, you can try something like it:
def letters?(string) string.chars.any? { |char| ('a'..'z').include? char.downcase } end
If you don't wanna use regexp. This method return
true if there are any letters in the string:
> letters? 'asd' => true > letters? 'asd123' => true > letters? '123' => false | https://codedump.io/share/7u6u8Bh8FTBh/1/how-to-return-true-or-false-when-a-string-contains-any-letters-a-z-or-a-z | CC-MAIN-2017-09 | refinedweb | 131 | 83.05 |
The usage of adding f before Python string
import time
t0 = time.time ()
time.sleep (1)
name = ‘processing’
#Starting with {f} indicates that Python expressions in braces are supported in strings
Print (f ‘{name} done in{ time.time () – t0:.2f} s’)
Output:
processing done in 1.00 s
Why report a mistake
This usage is only used after 3.6. Mine is Python 3.5, which is so direct and simple
resolvent
What’s the way?Of course, it’s anaconda
Read More:
- Java String.split () special character processing
- Python SyntaxError: (unicode error) ‘unicodeescape’ codec can’t decode bytes in position 2-3:
- Python’s direct method for solving linear equations (5) — square root method for solving linear equations
- Detailed explanation of yield in Python — the simplest and clearest explanation
- How to use Python split() function (split array)
- Several methods of executing multiple commands in Linux shell
- An error occurred while processing your request
- ERROR: pygame-1.9.2-cp35-cp35m-win32.whl is not a supported wheel on this platform.
- Python global variables and global keywords
- The processing method after deleting idea’s. IML file by mistake
- Translate() and maketrans() methods of string in Python
- Simple Python crawler exercise: News crawling on sohu.com
- Pandas memory error
- In Python, print() prints to remove line breaks
- Type error: the JSON object must be STR, bytes or byte array, not ‘textiowrapper’
- In Python sys.argv Usage of
- python: HTTP Error 505: HTTP Version Not Supported
- Python: crawler handles strings of XML and HTML
- Python: print syntax error: problem with invalid syntax error
- Solution to Spacy’s failure to load (‘de ‘) or (‘en’) | https://programmerah.com/syntax-error-invalid-syntax-before-python-string-24854/ | CC-MAIN-2021-17 | refinedweb | 271 | 60.75 |
Red Hat Bugzilla – Bug 106252
RFE: EXT3_ACL_MAX_ENTRIES too low
Last modified: 2007-11-30 17:06:58 EST
From Bugzilla Helper:
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.4) Gecko/20030709
Description of problem:
Our deployment requires drop boxes for every department, with many users needing
access to most boxes. Because of the 32 group limit (16 in Solaris), we've
typically used ACL's for this application (Solaris 8).
The EXT3 ACL's permit 32 entries per file/dir, only. This is obviously designed
to prevent confusing, slow, or badly thought out security schemes, but 32 is
just too low. JFS-unsupported supports several thousand, Solaris 8 supports
1024. For our purposes, 128 is plenty.
The ACE's apparently need to fit in a single disk block, and maybe part of that
space is being saved for other future things. But this just won't do.
Please help to make EXT3 ACL support include enterprise scalability.
Version-Release number of selected component (if applicable):
kernel-2.4.21-1.1931.2.399.ent
How reproducible:
Always
Steps to Reproduce:
1. Assign 28 ACE's with setfacl
2.
3.
Actual Results: setfacl: <directory>: Invalid argument
Expected Results: Additional ACE's should be applied
Additional info:
--- linux-2.4.20/include/linux/ext3_acl.h.orig Fri Oct 3 17:28:02 2003
+++ linux-2.4.20/include/linux/ext3_acl.h Fri Oct 3 17:28:15 2003
@@ -9,7 +9,7 @@
#include <linux/xattr_acl.h>
#define EXT3_ACL_VERSION 0x0001
-#define EXT3_ACL_MAX_ENTRIES 32
+#define EXT3_ACL_MAX_ENTRIES 128
typedef struct {
__u16 e_tag;
128 is too much for a single disk block on 1k filesystems (though it works on
larger blocksizes.) This is a disk format issue, and there's no way that Red
Hat can deviate from the upstream codebase on this point.
Extending the limit on 4k blocksize filesystems may be possible but would lead
to problems when moving or copying files between filesystems.
And ultimately _any_ limit here is arbitrary, and _somebody_ will think it's too
low. But I'll see what the upstream reaction is.
Andreas Gruenbacher's patch to ease the limit on reading large ext2/3
ACL lists has been accepted upstream. The corresponding patch for
writes has not. I'll bring this up again to see if we can make progress.
This is an on-disk format change, though --- old kernels will be
unable to open files that are created with new kernels that use longer
ACLs. So it's still not something we can do without a great deal of
thought, and upstream acceptance.
Note: For my application, Tim Hockin's patch to remove the NGROUP hard
limit that was merged in 2.6 is even more desirable than raising
EXT3_ACL_MAX_ENTRIES.
Is this something that is still an issue or has everyone gone to
RHEL-4 or can everyone go to RHEL-4? This is addressed in RHEL-4
and the limit there is somewhere just over 500 ACL entries.
Since this issue has not been updated in 18 months, I would propose
closing this as WONTFIX and suggest that people consider using RHEL-4.
Agreed. RHEL4 is OK with me. WONTFIX is acceptable.
Thanx. If this does turn out to be a significant issue for folks,
then we can look at this decision again. It is getting late in the
RHEL-3 lifecycle for changes like this, so perhaps considering a
newer RHEL release might be the better choice in the long run anyway. | https://bugzilla.redhat.com/show_bug.cgi?id=106252 | CC-MAIN-2017-30 | refinedweb | 584 | 65.32 |
#include "avcodec.h"
#include "internal.h"
#include "get_bits.h"
#include "put_bits.h"
#include "wmaprodata.h"
#include "dsputil.h"
#include "wma 100 of file wmaprodec.c.
Referenced by decode_init().
maximum compressed frame size
Definition at line 101 99 of file wmaprodec.c.
Referenced by decode_init(), and decode_tilehdr().
Referenced by dump_context().
Definition at line 113 of file wmaprodec.c.
Referenced by decode_scale_factors().
Definition at line 114 of file wmaprodec.c.
Referenced by decode_scale_factors().
Definition at line 109 of file wmaprodec.c.
Referenced by decode_init(), and decode_scale_factors().
Definition at line 112 of file wmaprodec.c.
Referenced by decode_coeffs().
Definition at line 111 of file wmaprodec.c.
Referenced by decode_coeffs().
Definition at line 110 of file wmaprodec.c.
Referenced by decode_coeffs().
Definition at line 108 of file wmaprodec.c.
log2 of max block size
Definition at line 103 of file wmaprodec.c.
Referenced by decode_init().
maximum block size
Definition at line 104 of file wmaprodec.c.
possible block sizes
Definition at line 105 of file wmaprodec.c.
Referenced by decode_end(), and decode_init().
current decoder limitations
max number of handled channels
Definition at line 98 of file wmaprodec.c.
Referenced by decode_decorrelation_matrix(),5265 250 of file wmaprodec.c.
Decode one WMA frame.
check for potential output buffer overflow
return an error if no frame could be decoded at all 1268 of file wmaprodec.c.
Initialize the decoder.
dump the extradata
generic init
frame info
skip first frame
get frame len
init previous block len
subframe info 266
save the rest of the data so that it can be decoded with the next packet
Definition at line 146067
FIXME: might change run level mode decision
decode quantization step
decode quantization step modifiers for every channel
decode scale factors
parse coefficients
reconstruct the per channel data
inverse quantization and rescaling
apply imdct (ff_imdct_half == DCTIV with reverse)
window and overlapp-add
handled one subframe
Definition at line 1045 of file wmaprodec.c.
Decode the subframe length.
no need to read from the bitstream when only one length is possible
1 bit indicates if the subframe is of maximum length
sanity check the length
Definition at line 461 509 of file wmaprodec.c.
Referenced by decode_frame().
helper function to print the most important members of the context
Definition at line 231 of file wmaprodec.c.
Referenced by decode_init().
Clear decoder buffers (for seeking).
reset output buffer as a part of it is used during the windowing of a new frame
Definition at line 1551 of file wmaprodec.c.
Reconstruct the individual channel data.
multichannel decorrelation
multiply values with the decorrelation_matrix
Definition at line 960 of file wmaprodec.c.
Referenced by decode_subframe().
Calculate remaining input buffer length.
Definition at line 1395 of file wmaprodec.c.
Referenced by decode_packet().07 of file wmaprodec.c.
Referenced by decode_packet().
Apply sine window and reconstruct the output buffer.
Definition at line 1015 of file wmaprodec.c.
Referenced by decode_subframe().
coefficient run length vlc codes
Definition at line 121 of file wmaprodec.c.
scale factor run length vlc
Definition at line 117 of file wmaprodec.c.
scale factor DPCM vlc
Definition at line 116 of file wmaprodec.c.
sinus table for decorrelation
Definition at line 122 of file wmaprodec.c.
Referenced by decode_decorrelation_matrix(), and decode_init().
1 coefficient per symbol
Definition at line 120 of file wmaprodec.c.
2 coefficients per symbol
Definition at line 119 of file wmaprodec.c.
4 coefficients per symbol
Definition at line 118 1567 of file wmaprodec.c. | http://www.ffmpeg.org/doxygen/0.6/wmaprodec_8c.html | CC-MAIN-2016-40 | refinedweb | 567 | 54.08 |
1. Getting started with Java development and the Eclipse IDE
In this tutorial you learn how to get started with the Eclipse IDE to start programming in Java.
You learn how to create a Java project and your first Java class and how to start your program.
It assumes that you already installed and started the Eclipse IDE. It also uses the dark theme from.
2. Create your first Java program
The following section describes how to create a minimal Java application using the Eclipse IDE.
2.1. Create project
Click on the Create a Java project quicklink.
com.vogella.eclipse.ide.first as the project name and press the Finish button to create the project.
Select Create if you are asked to create a module file.
A new project is created and displayed as a folder.
Open the
com.vogella.eclipse.ide.first folder and explore the content of this folder.
2.2. Create package
Create the
com.vogella.eclipse.ide.first package by selecting the
src folder, right-click on it and select .
Press the Finish button.
2.3. Create Java class
Right-click on your package and selectto create a Java class.
Enter
FirstJava as the class name and select the public static void main (String[] args) checkbox.
Press the Finish button.
This creates a new file and opens the Java editor. Change the class based on the following listing.
package com.vogella.eclipse.ide.first; public class FirstJava { public static void main(String[] args) { System.out.println("Hello, World!"); } }
2.4. Run your application code from the IDE
Now run your code. Right-click on your Java class in the Package Explorer or Project Explorer and select.
Eclipse will run your Java program. You should see the output in the Console view.
Congratulations! You created your first Java project, a package, a Java class and you ran this program inside Eclipse. | https://www.vogella.com/tutorials/EclipseJavaIDEGettingStarted/article.html | CC-MAIN-2021-17 | refinedweb | 315 | 68.26 |
PHAT File System
PHAT is then name of the FAT12, FAT16 and FAT32 compatible file system provided by Nut/OS. Currently this is still alpha code and may not work as expected. Beside the fact, that it will probably contain a lot of bugs, the following known problems and limitations exist:
- Tested on Ethernut 3 (ARM7) only.
- Long filenames (VFAT) are not yet supported.
- Only a single sector buffer is used.
- Deleting large files is sometimes extremly slow or seems to fail completely.
- Possibly fails to change back to the root directory. At least this problem appears with the ftpd sample.
- After deleting directories with files using the ftpd sample, running ckdisk or similar tools results in lost+found fragments.
- Write protections are ignored.
The max. performance with MMC cards on Ethernut 3 is 128 kBytes/s for writing and 350 kBytes/s for reading.
Prerequisites
In most cases you need to update the programmable logic on Ethernut 3 to NPL Version 5. Check Ethernut 3 NPL Version about how to do this.
Nut/OS 4.2.1 or later is required.
Nut/OS File System History
Since the very early days Nut/OS came with a lowest-end read-only file system called UROM, which is still in use. It actually had been a quick hack to support HTTP requests in the first place.
Later on Michael Fischer added a FAT file system, which had been successfully used for ATA harddisk and CDROMs with ATAPI interface. Unfortunately it was also read-only and no file system with write access existed over a long period.
Several contributors published other file systems for Nut/OS, like
Xflash by Michael Fischer or
SPIFlashFileSystem by Dusan Ferbas.
Unfortunately no developer with write access to the source code repository added the code to the distribution.
Not too long ago, the PNUT file system had been introduced. It was the first officially released driver with write access and its main purpose is to offer an easy to use high level access to banked memory. Though, as it uses RAM, all stored information is gone after cycling the power supply.
An interesting idea had been contributed by Maarten van Heesch, named XPNUT. It creates a copy of the PNUT file system in a serial flash chip. Upon initialization, the flashed copy is transfered back to RAM.
Using PHAT
The modularity of Nut/OS requires to register all drivers that will be used by the application. Calling NutRegisterDevice() with a pointer to the NUTDEVICE information structure of the device we want to use creates a reference to the device driver. When the application is later linked to the Nut/OS libraries, then the driver code of the referenced devices is added to the final binary that runs on the target board. Drivers, which are not registered, will not be part of the binary image. This is true for hardware drivers as well as file system drivers, as Nut/OS handles both in a similar way.
The PHAT file system does not directly access the hardware. It needs a block device driver attached to it, which provides the low level block read and write access. At the time of this writing only one block device driver is available, which supports Multimedia and SD Cards in SPI mode.
Here's how an application registers the PHAT file system and the block device driver.
#include <dev/nplmmc.h> #include <fs/phatfs.h> /* Register the PHAT file system. */ if (NutRegisterDevice(&devPhat0, 0, 0)) { /* Handle error */ } /* Register the MMC block device. */ if (NutRegisterDevice(&devNplMmc0, 0, 0)) { /* Handle error */ }
After the drivers had been successfully registered, the application can take the next step: Mounting a volume. From here on we forget about any NUTDEVICE structure. As soon as the device is registered, Nut/OS knows its name, which is stored inside the NUTDEVICE structure. Application code uses this name instead of the NUTDEVICE structure itself. The idea behind this is, that coming versions will completely ban the NUTDEVICE structure from application code and a special initialization module will be used to register the correct devices.
Mass storage devices may contain more than one volume, also known as partitions. Thus, the application must specify three items to mount a volume:
- The block device to use.
- The volume to mount.
- The file system to use.
#include <stdio.h> #include <fcntl.h> int hvol; /* Mount partition. */ if ((hvol = _open("MMC0:1/PHAT0", _O_RDWR | _O_BINARY)) == -1) { /* Handle error */ }
Note, that the first partition is "1". Partition "0" is a special case and will mount the first primary partition that is marked active. We may even use the shortest form
hvol = _open("MMC0:", _O_RDWR | _O_BINARY);
The value returned by _open() can be used later to unmount the volume. Again there is no specific call available and _close() is used instead.
_close(hvol);
After a volume has been successfully mounted, standard I/O calls can be used to access its contents, typically files and directories. As there is no support right now for a current working path, applications have to specify the full path including the name of the file system in front. The following sample opens a file for write access, writes a simple text line to it and close the file afterwards. If the file doesn't exist, it will be created.
#include <stdio.h> FILE *fp; fp = fopen("PHAT0:/simple.txt", "w"); fprintf(fp, "First line in this file\n"); fclose(fp);
Special functions are provided to read directories, namely opendir(), readdir() and closedir().
New subdirectories can be created with mkdir() and existing subdirectories may be removed by calling rmdir().
FTP Server Sample
When compiling app/ftpd/ftpserv.c for Ethernut 3, then the PHAT file system is used by default. A precompiled binary is included in this archive.
PHAT Driver Internals
Buffering
The driver is prepared to handle its own sector buffers. However, in the current version it uses the single sector buffer provided by the block device driver.
The single buffer makes any kind of write access very slow. However, read access times are acceptable and using the buffer of the block device driver allows to create minimal systems with ATA interfaces, which share address and/or data bus with I/O ports and do not allow to access external memory during ATA transfers. Such a candidate would be an ATmega128 CPU with IDE CDROM or harddisk, where the sector buffer is located in the on-chip RAM.
Anyway, later versions of the PHAT driver will probably have to use a more advanced buffering.
Driver Modules
Compared the other Nut/OS drivers the PHAT file system driver is quite large and split into several source code modules.
- phatfs.c
Contains the device structure devPhat0, the main driver routines, as well as cluster allocation routines. The device structure has the following format:The same structure is used in Nut/OS for all kind of devices.
struct _NUTDEVICE { NUTDEVICE *dev_next; u_char dev_name[9]; u_char dev_type; uptr_t dev_base; u_char dev_irq; void *dev_icb; void *dev_dcb; int (*dev_init) (NUTDEVICE *); int (*dev_ioctl) (NUTDEVICE *, int, void *); int (*dev_read) (NUTFILE *, void *, int); int (*dev_write) (NUTFILE *, CONST void *, int); int (*dev_write_P) (NUTFILE *, PGM_P, int); NUTFILE * (*dev_open) (NUTDEVICE *, CONST char *, int, int); int (*dev_close) (NUTFILE *); long (*dev_size) (NUTFILE *); };
- dev_next
points to the next device. All registered devices are linked by this pointer, which is NULL for the last device.
- dev_name
stores the symbolic name of the device.
- dev_type
contains the type of the interface, which this driver provides. For file systems the type IFTYP_FS is used.
- dev_base and dev_irq
are not used by file system drivers.
- dev_icb
is used by the PHAT device to store the NUTFILE handle of the associated block device.
- dev_dcb
is used by the PHAT device to store the pointer to its volume information structure (see below).
The remaining structure elements contain pointers to the basic file system functions.
- dev_init, points to PhatInit
Applications need to register the file system driver with NutRegisterDevice(&devPhat0, 0, 0), which in turn calls this routine.
- dev_open, points to PhatFileOpen
This low level routine opens a file and is called by the C runtime library functions like _open(). In addition to open normal files, it is used by the module phatdir.c to open directories
- dev_close, points to PhatFileClose
Closes a normal file or directory, which had been previously opened by PhatFileOpen().
- dev_read, points to PhatFileRead
Reads data from a file or directory previously opened by PhatFileOpen().
- dev_write, points to PhatFileWrite
Writes data to a file or directory previously opened by PhatFileOpen().
- dev_ioctl, points to PhatIOCtl
- FS_STATUS
Queries the status of a directory entry. Used by the C runtime function stat().
- FS_DIR_CREATE
Creates a subdirectory entry. Used by the C runtime function mkdir().
- FS_DIR_REMOVE
Deletes a subdirectory entry. Used by the C runtime function rmdir().
- FS_DIR_OPEN
Opens a directory. Used by the C runtime function opendir().
- FS_DIR_CLOSE
Closes a directory. Used by the C runtime function closedir().
- FS_DIR_READ
Reads the next entry of a previously opened directory. Used by the C runtime function readdir().
- FS_FILE_STATUS
Queries the status of a previously opened file. Not implemented.
- FS_FILE_DELETE
Deletes a file. Used by the C runtime function unlink().
- FS_FILE_SEEK
Sets a file pointer. Used by the C runtime function _seek().
- FS_RENAME
Renames a file. Implemented, but currently not used.
- FS_VOL_MOUNT
Mounts a FAT volume. This function is called by the block device driver.
- FS_VOL_UNMOUNT
Unmounts a FAT volume. This function is called by the block device driver.
- phatvol.c
This modules contains the routines to mount and unmount a volume.
struct _PHATVOL { int vol_type; u_long vol_numfree; u_long vol_nxtfree; u_char *vol_buf; u_long vol_bufsect; int vol_bufdirty; u_int vol_sectsz; u_int vol_clustsz; u_long vol_tabsz; u_long vol_tab_sect[2]; u_int vol_rootsz; u_long vol_root_sect; u_long vol_root_clust; u_long vol_last_clust; u_long vol_data_sect; };
- phatdir.c
Contains routines to create, remove, rename, search, read and update directory entries.
- phat12.c
phat16.c
phat32.c
These modules contain FAT12, FAT16 or FAT32 specific routines to set, query and release cluster links.
- phatio.c
The routines in this modules are used to load or unload sector buffers. Currently the PHAT driver doesn't provide its own buffering but uses the (single) buffer of the block device.
- phatutil.c
Contains various helper routines.
- phatdbg.c
Useful routines for debugging the file system driver. | http://www.ethernut.de/en/documents/phat.html | CC-MAIN-2022-21 | refinedweb | 1,703 | 57.37 |
Usage
Typescript Import Format
//To import this class, use the format below.
import {urlPathAdapter} from "ojs/ojrouter";
For additional information visit:
Oracle® JavaScript Extension Toolkit (JET)
10.0.0
F32683-01
Url adapter used by the oj.Router to manage URL in the form of
/book/chapter2.
The UrlPathAdapter is the default adapter used by the router as it makes more human-readable URLs, is user-friendly, and less likely to exceed the maximum charaacter limit in the browser URL.
Since this adapter generates path URLs, it's advisable that your application be able to restore the page should the user bookmark or reload the page. For instance, given the URL
/book/chapter2, your
application server ought to serve up content for "chapter2" if the user should
bookmark or reload the page. If that's not possible, then consider using the
urlParamAdapter.
There are two available URL adapters, this one and the urlParamAdapter.
To change the URL adapter, use the urlAdapter property.
//To import this class, use the format below.
import {urlPathAdapter} from "ojs/ojrouter";
For additional information visit: | https://www.oracle.com/webfolder/technetwork/jet/jsdocs/oj.Router.urlPathAdapter.html | CC-MAIN-2021-10 | refinedweb | 179 | 65.83 |
So after a while I finally figured out the syntax to make a pretty simple password program. While I was at it, I thought I might try to run another method from a different class in the password program. Thing is, there were no errors, but the other methods would simply just not work. What is the proper syntax for getting a method from another class to work properly?
Here's the short little bit of code that Im trying to get the method to work in:
package OtherStuff; import acm.program.ConsoleProgram; import java.util.*; public class stringPassword extends ConsoleProgram { String password = "bob"; public void run() { ageCalc acalc = new ageCalc(); while (true) { String input = readLine("Enter password here: "); if (input.equals(password)) { println("Access granted\n"); acalc.startAgeCalc(); break; } else println("Access denied\n"); } } }
The other class is called obviously called ageCalc and the method im trying to run is startAgeCalc();. Everything in startAgeCalc() is public, and besides that I can't think of any other reason why this wouldnt work. But then again, I would have no idea in the first place as this will be my first time trying to call on a method from another class.
Anyway, the solution is most likely simple, but I can't seem to figure it out. Could importing and extending ConsoleProgram be a playing factor? Thx in advance for the newbie help! | http://www.javaprogrammingforums.com/whats-wrong-my-code/3166-re-using-methods.html | CC-MAIN-2016-36 | refinedweb | 232 | 64.1 |
Analyzing Data
)"
Each line has the following components:
IP address from which the request was made.
Two fields (represented with - characters) having to do with authentication.
The timestamp.
The HTTP request, starting with the HTTP request method (usually
GETor
The result code, in which 200 represents "OK".
The number of bytes transferred.
The referrer, meaning the URL that the user came from.
The way in which the browser identifies itself.
This information might seem a bit primitive and limited, but you can use it to understand a large number of factors better having to do with visitors to your blog. Note that it doesn't include information that JavaScript-based analytics packages (for example, Google Analytics) can provide, such as session, browser information and cookies. Nevertheless, logfiles can provide you with some good basics.
Two of the first steps of any data science project are 1) importing the data and 2) cleaning the data. That's because any data source will have information that's not really useful or relevant for your purposes, which will throw off the statistics or add useless bloat to the data you're trying to import. Thus, here I'm going to try to read the Apache logfile into Python, removing those lines that are irrelevant. Of course, what is deemed to be "irrelevant" is somewhat subjective; I'll get to that in just a bit.
Let's start with a very simple parsing of the Apache logfile. One of the first things Python programmers learn is how to iterate over the lines of a file:
infile = 'short-access-log' for line in open(infile): print(line)
The above will print the file, one line at a time. However, for this example, I'm not interested in printing it; rather, I'm interested in turning it into a CSV file. Moreover, I want to remove the lines that are less interesting or that provide spurious (junk) data.
In order to create a CSV file, I'm going to use the
csv module that
comes with Python. One advantage of this module is that it can take
any separator; despite the name, I prefer to use tabs between my
columns, because there's no chance of mixing up tabs with the data I'm
passing.
But, how do you get the data from the logfile into the CSV module? A
simple-minded way to deal with this would be to break the input string
using the
str.split method. The good news is that split will work, at
least to some degree, but the bad news is that it'll parse things
far less elegantly than you might like. And, you'll end up with all
sorts of crazy stuff going on.
The bottom line is that if you want to read from an Apache logfile,
you'll need to figure out the logfile format and read it, probably
using a regular expression. Or, if you're a bit smarter, you can use an
existing library that already has implemented the regexp and logic.
I searched on PyPI (the Python Package Index) and found clfparser, a
package that knows how to parse Apache logfiles in what's known as the
"common logfile format" used by a number of HTTP servers for many
years. If the variable
line contains one line from my Apache
logfile, I can do the following:
from clfparser import CLFParser infilename = 'short-access-log' for line in open(infilename): print CLFParser.logDict(line)
In this way, I have turned each line of my logfile into a Python dictionary, with each key-value pair in the dictionary referencing a different field from my logfile's row.
Now I can go back to my CSV module and employ the DictWriter class that comes with it. DictWriter, as you probably can guess, allows you to output CSV based on a dictionary. All you need to do is declare the fields you want, allowing you to ignore some or even to set their order in the resulting CSV file. Then you can iterate over your file and create the CSV.
Here's the code I came up with:
import csv from clfparser import CLFParser infilename = 'short-access-log' outfilename = 'access.csv': writer.writerow(CLFParser.logDict(line))
Let's walk through this code, one piece at a time. It's not very complex, but it does pull together a number of packages and functionality that provide a great deal of power in a small space:
First, I import both the
csvmodule and the CLFParser class from the
clfparsermodule. I'm going to be using both of these modules in this program; the first will allow me to output CSV, and the second will let me read from the Apache logs.
I set the names of the input and output files here, both to clean up the following code a bit and to make it easier to reuse this code later.
I then use the
withstatement, which invokes what's known as a "context manager" in Python. The basic idea here is that I'm creating two file objects, one for reading (the logfile) and one for writing (the CSV file). When the
withblock ends, both files will be closed, ensuring that no data has been left behind or is still in a buffer.
Given that I'm going to be using the CSV module's DictWriter, I need to indicate the order in which fields will be output. I do this in a list; this list allows allow me to remove or reorder fields, should I want to do so.
I then create the csv.DictWriter object, telling it that I want to write data to outfile, using the field names I just defined and using tab as a delimiter between fields.
I then write a header to the file; although this isn't crucial, I recommend that you do so for easier debugging later. Besides, all CSV parsers that I know of are able to handle such a thing without any issues.
Finally, I iterate over the rows of the access log, turning each line into a dictionary and then writing that dictionary to the CSV file. Indeed, you could argue that the final line there is the entire point of this program; everything up to that point is just a preface.. | https://www.linuxjournal.com/content/analyzing-data?page=0,2&quicktabs_1=2 | CC-MAIN-2018-17 | refinedweb | 1,055 | 67.08 |
Play Scala's Anorm, Heroku and PostgreSQL Issues
Play Scala's Anorm, Heroku and PostgreSQL Issues
Join the DZone community and get the full member experience.Join For Free
“I love writing authentication and authorization code.” ~ No Developer Ever. Try Okta instead
This article is the 5th in a series on about my adventures developing a Fitness Tracking application for my talk at Devoxx in two weeks.
Anorm
In my previous article, I described how I created my application's features using CoffeeScript and make it look good using Twitter's Bootstrap. Next, I turned to persisting this data with Anorm.
The Scala module includes a brand new data access layer called Anorm that uses plain SQL to make your database request and provides several API to parse and transform the resulting dataset.
I'm a big fan of ORMs like Hibernate and JPA, so having to learn a new JDBC abstraction wasn't exactly appealing at first. However, since Anorm is the default for Play Scala, I decided to try it. The easiest way for me to learn Anorm was to start coding with it. I used A first iteration for the data model as my guide and created model objects, companion objects that extended Magic (appropriately named) and wrote some tests using scalatest. I started with an "Athlete" model since I knew "User" was a keyword in PostgreSQL and that's what Heroku uses for its database.
package models import play.db.anorm._ import play.db.anorm.defaults._ case class Athlete( id: Pk[Long], email: String, password: String, firstName: String, lastName: String ) { } object Athlete extends Magic[Athlete] { def connect(email: String, password: String) = { Athlete.find("email = {email} and password = {password}") .on("email" -> email, "password" -> password) .first() } def apply(firstName: String) = new Athlete(NotAssigned, null, null, firstName, null) }
Then I wrote a couple tests for it in test/Tests.scala.
import play._ import play.test._ import org.scalatest._ import org.scalatest.junit._ import org.scalatest.matchers._ class BasicTests extends UnitFlatSpec with ShouldMatchers with BeforeAndAfterEach { import models._ import play.db.anorm._ override def beforeEach() { Fixtures.deleteDatabase() } it should "create and retrieve a Athlete" in { var athlete = Athlete(NotAssigned, "jim@gmail.com", "secret", "Jim", "Smith") Athlete.create(athlete) val jim = Athlete.find( "email={email}").on("email" -> "jim@gmail.com" ).first() jim should not be (None) jim.get.firstName should be("Jim") } it should "connect a Athlete" in { Athlete.create(Athlete(NotAssigned, "bob@gmail.com", "secret", "Bob", "Johnson")) Athlete.connect("bob@gmail.com", "secret") should not be (None) Athlete.connect("bob@gmail.com", "badpassword") should be(None) Athlete.connect("tom@gmail.com", "secret") should be(None) }
At this point, everything was fine and dandy. I could run "play test", open in my browser and run the tests to see a beautiful shade of green on my screen. I continued following the tutorial, substituting "Post" with "Workout" and added Comments too. The Workout object shows some of the crazy-ass syntax that is Anorm getting fancy with Scala.
object Workout extends Magic[Workout] { def allWithAthlete: List[(Workout, Athlete)] = SQL( """ select * from Workout w join Athlete a on w.athlete_id = a.id order by w.postedAt desc """ ).as(Workout ~< Athlete ^^ flatten *) def allWithAthleteAndComments: List[(Workout, Athlete, List[Comment])] = SQL( """ select * from Workout w join Athlete a on w.athlete_id = a.id left join Comment c on c.workout_id = w.id order by w.postedAt desc """ ).as(Workout ~< Athlete ~< Workout.spanM(Comment) ^^ flatten *) def byIdWithAthleteAndComments(id: Long): Option[(Workout, Athlete, List[Comment])] = SQL( """ select * from Workout w join Athlete a on w.athlete_id = a.id left join Comment c on c.workout_id = w.id where w.id = {id} """ ).on("id" -> id).as(Workout ~< Athlete ~< Workout.spanM(Comment) ^^ flatten ?) }
All of these methods return Tuples, which is quite different from an ORM that returns an object that you call methods on to get its related items. Below is an example of how this is referenced in a Scalate template:
-@ val workout:(models.Workout,models.Athlete,Seq[models.Comment]) - var commentsTitle = "No Comments" if (workout._3.size > 0) commentsTitle = workout._3.size + " comments, lastest by " + workout._3(workout._3.size - 1).author div(class="workout") h2.title a(href={action(controllers.Profile.show(workout._1.id()))}) #{workout._1.title} .metadata span.user Posted by #{workout._2.firstName} on span.date #{workout._1.postedAt} .description = workout._1.description
Evolutions on Heroku
I was happy with my progress until I tried to deploy my app to Heroku. I added db=${DATABASE_URL} to my application.conf as recommended by Database-driven web apps with Play! on Heroku/Cedar. However, when I deployed, it failed because my database tables weren't created.
2011-10-05T04:08:52+00:00 app[web.1]: 04:08:52,712 WARN ~ Your database is not up to date. 2011-10-05T04:08:52+00:00 app[web.1]: 04:08:52,712 WARN ~ Use `play evolutions` command to manage database evolutions. 2011-10-05T04:08:52+00:00 app[web.1]: 04:08:52,713 ERROR ~ 2011-10-05T04:08:52+00:00 app[web.1]: 2011-10-05T04:08:52+00:00 app[web.1]: @681m15j3l 2011-10-05T04:08:52+00:00 app[web.1]: Can't start in PROD mode with errors 2011-10-05T04:08:52+00:00 app[web.1]: 2011-10-05T04:08:52+00:00 app[web.1]: Your database needs evolution! 2011-10-05T04:08:52+00:00 app[web.1]: An SQL script will be run on your database. 2011-10-05T04:08:52+00:00 app[web.1]: 2011-10-05T04:08:52+00:00 app[web.1]: play.db.Evolutions$InvalidDatabaseRevision
With James Ward's help, I learned I needed to use "heroku run" to apply evolutions. So I ran the following command:
heroku run "play evolutions:apply --%prod"
Unfortunately, this failed:
Running play evolutions:apply --%prod attached to terminal... up, run. 5 ~ _ _ ~ _ __ | | __ _ _ _| | ~ | '_ \| |/ _' | || |_| ~ | __/|_|\____|\__ (_) ~ |_| |__/ ~ ~ play! 1.2.3, ~ framework ID is prod ~ Oct 17, 2011 7:05:46 PM play.Logger warn WARNING: Cannot replace DATABASE_URL in configuration (db=$ {DATABASE_URL}) Exception in thread "main" java.lang.NullPointerException at play.db.Evolutions.main(Evolutions.java:54)
After opening a ticket with Heroku support, I learned this was because DATABASE_URL was not set ("heroku config" shows your variables). Apparently, this should be set when you create your app, but somehow wasn't for mine. To fix, I had to run the following command:
$ heroku pg:promote SHARED_DATABASE -----> Promoting SHARED_DATABASE to DATABASE_URL... done
PostgreSQL and Dates
The next issue I ran into was with loading default data. I have the following BootStrap.scala class in my project to load default data:) } } } } }
For some reason, only my "athlete" table was getting populated and the others weren't. I tried turning on debugging and trace, but nothing showed up in the logs. This appears to be a frequent issue with Play. When data fails to load, there's no logging indicating what went wrong. To make matters worse with Anorm, there's no way to log the SQL that it's attempting to run. My BootStrap job was working fine when connecting to "db=mem", but stopped after switching to PostgreSQL. The support I got for this issue was disappointing, since it caused crickets on Play's Google Group. I finally figured out "support of Date for insertion" was added to Anorm a couple months ago.
To get the latest play-scala code into my project, I cloned play-scala, built it locally and uploaded it to my server. Then I added the following to dependencies.yml and ran "play deps --sync".
require: ... - upgrades -> scala 0.9.1-20111025 ... repositories: - upgrades: type: http artifact: "[module]-[revision].zip" contains: - upgrades -> *
Summary
When I started writing this article, I was going to talk about some improvements I made to Scalate Play interoperability. However, I think I'll save that for next time and possibly turn it into a plugin using play-excel as an example.
As you can tell from this article, my experience with Anorm was frustrating - particularly due to the lack of error messages when operations failed. The lack of support was expected, as this usually happens when you're living on the bleeding edge. However, based on this experience, I can't help but think that it might be a while before Play 2.0 is ready for production use.The good news is IntelliJ is adding support for Play. Maybe this will help increase adoption and inspire the framework's developers to stabilize and improve Play Scala before moving the entire framework to Scala. After all, it seems they've encountered some issues making Scala as fast as Java.
From
“I love writing authentication and authorization code.” ~ No Developer Ever. Try Okta instead
Opinions expressed by DZone contributors are their own.
{{ parent.title || parent.header.title}}
{{ parent.tldr }}
{{ parent.linkDescription }}{{ parent.urlSource.name }} | https://dzone.com/articles/play-scalas-anorm-heroku-and | CC-MAIN-2018-22 | refinedweb | 1,502 | 59.8 |
IMPORTANT UPDATE: Chris has shown a much easier way to do this than I originally outlined, so I have replaced my original notes with notes from his sample repo. There's a build speed tradeoff, I discuss alternatives at bottom.
On the latest Toolsday, Chris Dhanaraj said he had trouble finding documentation for adding Tailwind to Svelte.
Today I also needed to add Tailwind to a Svelte project, so I am writing this as a reference for myself. Setting up PostCSS with Svelte is something I have documented on the new Svelte Society site, but of course it could be better and more specifically tailored to Tailwind (which after all is "just" a PostCSS plugin).
So I am writing this for him and for me.
A quick aside on WHY Use Tailwind with Svelte, since Svelte offers scoped CSS by default: Tailwind offers a nicely constrained "design system" so you don't overuse Magic Numbers and it's easy to add responsive styling with Tailwind breakpoints. Because Tailwind has the developer experience of "inline styles", I also find it easier to delete and move HTML around without having to go back for the styling. I also like not having to name classes. I discuss more on Why Tailwind in general in a separate post.
3 Steps
I will assume you have a standard existing Svelte or Sapper project with no PostCSS/Tailwind set up. I'll also add in
autoprefixer and
postcss-nesting since I like to work with those, but of course feel free to remove as needed.
Step 1: Install deps
npm install -D svelte-preprocess tailwindcss autoprefixer postcss-nesting # optional tailwind ui plugin npm install @tailwindcss/ui
Step 2: Setup Config Files
Add a
tailwind.config.js file at the project root:
// tailwind.config.js const production = !process.env.ROLLUP_WATCH; // or some other env var like NODE_ENV module.exports = { future: { // for tailwind 2.0 compat purgeLayersByDefault: true, removeDeprecatedGapUtilities: true, }, plugins: [ // for tailwind UI users only require('@tailwindcss/ui'), // other plugins here ], purge: { content: [ "./src/**/*.svelte", // may also want to include base index.html ], enabled: production // disable purge in dev }, };
And now set it up inside of your Svelte bundler config as well:
import sveltePreprocess from "svelte-preprocess"; const production = !process.env.ROLLUP_WATCH; export default { plugins: [ svelte({ // etc... preprocess: sveltePreprocess({ // sourceMap: !production, postcss: { plugins: [ require("tailwindcss"), require("autoprefixer"), require("postcss-nesting") ], }, }), }), ] }
Here is the equivalent for Svelte with Webpack:
// webpack.config.js const sveltePreprocess = require('svelte-preprocess'); module.exports = { // ... module: { rules: [ { test: /\.svelte$/, use: { loader: 'svelte-loader', options: { emitCss: true, hotReload: true, preprocess: sveltePreprocess({ // sourceMap: !prod, postcss: { plugins: [ require("tailwindcss"), // require("autoprefixer"), require("postcss-nesting") ], }, }), } } }, }
Step 3: Add the Tailwind includes to your Svelte App
Typically a Svelte app will have a way to inject css already, so all we do is piggyback onto that. You'll want to put these includes at a reasonably high level, say
App.svelte or
Layout.svelte component that will be included in every page of your site.
<style global /* only apply purgecss on utilities, per Tailwind docs */ /* purgecss start ignore */ @tailwind base; @tailwind components; /* purgecss end ignore */ @tailwind utilities; </style>
And that's it!
Note: this section used to involve messing with
package.jsonscripts to run
postcss-cli, but Chris realized that you didn't need to do any of this since Svelte already has a way to inject CSS and
svelte-preprocessalready runs on every Svelte file.
Please see:
To see this working in action.
Unresolved
Svelte has a
class: binding syntax that isnt' supported by Tailwind out of the box. There is an open discussion for this.
Alternative Approaches
This method outlined above is simple to get running, but does end up running thousands of lines of Tailwind's CSS through the Svelte compiler. This may cause performance issues (primarily, every time you change the entry point file). Alternative approaches may be more appropriate depending on your preferences:
- Jacob Babich: "I'm moving to running the global css builder in parallel with a reimplementation of postcss-cli (just so I can have source maps controlled by a variable in rollup.config.js) but without getting that extreme you can just use npm-run-all with postcss-cli"
- dominikg: "The easiest way to setup tailwind with svelte:
npx svite create -t postcss-tailwind my-svelte-tailwind-project"
-
-
-
-
Discussion
Here is another similar tutorial for Svelte and Sapper that I have written a while ago: dev.to/sarioglu/using-svelte-with-...
It also includes template repositories for both.
added, cheers. i think mine is a bit simpler tho
just curious, why the purge css ignore? don't you want purge CSS to remove unused styles from the base, or is purge css too aggressive?
Thanks for the article!
no reason - i'll be honest i blindly copied it from github.com/chrisdhanaraj/svelte-ta... but you're right, it's not needed, i took it out
For context, I pulled that one from Tailwind's docs on Controlling File Size.
tailwindcss.com/docs/controlling-f...
dammit, lol nicely done. will add back with a note
Nice one @swyx perhaps you can add some tags #svelte #tailwind ?
oh word lol
In case anyone needs to setup Sapper + Tailwind one day 🙂
sapper-with-postcss-and-tailwind.v...
added, cheers | https://dev.to/swyx/how-to-set-up-svelte-with-tailwind-css-4fg5/ | CC-MAIN-2020-50 | refinedweb | 876 | 55.03 |
slacker alternatives and similar packages
Based on the "Networking" category
ejabberd10.0 9.4 slacker VS ejabberdRobust, ubiquitous and massively scalable Jabber/XMPP Instant Messaging platform.
socket9.5 0.0 slacker VS socketSocket wrapping for Elixir.
ExIrc7.8 2.4 slacker VS ExIrcIRC client adapter for Elixir projects.
sshkit7.3 4.7 slacker VS sshkitAn Elixir toolkit for performing tasks on one or more servers, built on top of Erlang’s SSH application.
sshex7.2 2.0 slacker VS sshexSimple SSH helpers for Elixir.
hedwig6.8 0.0 slacker VS hedwigXMPP Client/Bot Framework for Elixir.
reagent6.6 0.0 slacker VS reagentreagent is a socket acceptor pool for Elixir.
kaguya6.1 0.0 slacker VS kaguyaA small, powerful, and modular IRC bot.
download5.5 0.0 slacker VS downloadDownload files from the internet easily.
yocingo5.2 0.0 slacker VS yocingoCreate your own Telegram Bot.
SftpEx4.9 0.0 slacker VS SftpExElixir library for streaming data through SFTP
wifi4.6 0.0 slacker VS wifiVarious utility functions for working with the local Wifi network in Elixir.
chatty4.5 0.0 slacker VS chattyA basic IRC client that is most useful for writing a bot.
ExPcap4.4 1.8 slacker VS ExPcapPCAP parser written in Elixir.
chatter3.8 0.0 slacker VS chatterSecure message broadcasting based on a mixture of UDP multicast and TCP.
eio3.8 0.0 slacker VS eioElixir server of engine.io.
Guri3.2 0.0 slacker VS GuriAutomate tasks using chat messages.
tunnerl2.8 0.0 slacker VS tunnerlSOCKS4 and SOCKS5 proxy server.
pool1.6 0.0 slacker VS poolSocket acceptor pool for Elixir.
torex1.4 0.0 slacker VS torexSimple Tor connection library.
mac1.3 0.0 slacker VS macCan be used to find a vendor of a MAC given in hexadecimal string (according to IEEE).
asn1.1 0.0 slacker VS asnCan be used to map from IP to AS to ASN.
wpa_supplicant1.0 0.0 L4 slacker slacker or a related project?
Popular Comparisons
README
Slacker
Slacker's an Elixir bot library for Slack.
It has chat matching functionality built-in, but you can extend it to handle all kinds of events.
Chat
Slacker can match regex or literal strings, then execute a given function (module optional).
defmodule TARS do use Slacker use Slacker.Matcher match ~r/Sense of humor\. New level setting: ([0-9]+)%/, :set_humor match "Great idea. A massive, sarcastic robot.", [CueLight, :turn_on] def set_humor(tars, msg, level) do reply = "Sense of humor set to #{level}" say tars, msg["channel"], reply end end
Slacker will call your function with the matching message hash. You can use
say/3 to respond, be sure to include the channel you want to talk to.
Extending Slacker
Your robot is really just a
GenServer, you can catch RTM events from Slack and do whatever you like with them.
defmodule CASE do use Slacker def handle_cast({:handle_incoming, "presence_change", msg}, state) do say self, msg["channel"], "You're the man who brought us the probe?" {:noreply, state} end end
You can also use Slack's "Web API" via the
Slacker.Web module. All of the available RPC methods are downcased and underscored.
users.getPresence ->
Slacker.Web.users_get_presence("your_api_key", user: "U1234567890")
Bootin' it up
Add this to your deps:
def deps do [{:websocket_client, github: "jeremyong/websocket_client"}, {:slacker, "~> 0.0.3"}] end
Create a bot user in the Slack GUI, and then pass your api token to your bot's
start_link/1:
{:ok, tars} = TARS.start_link("your_api_token")
It's up to you to supervise your brand new baby bot.
You're going to need to invite your bot to a channel by
@-mentioning them.
Contributing
Gimme dem PR's.
Some of this stuff is a real pain in the ass to test, just do your best. :rocket:
TODO:
- Keep a map of usernames to ids.
- Keep a map of channel names to ids.
- Private messaging support.
- RTM tests.
License
See the LICENSE file. (MIT)
*Note that all licence references and agreements mentioned in the slacker README section above are relevant to that project's source code only. | https://elixir.libhunt.com/slacker-alternatives | CC-MAIN-2020-10 | refinedweb | 678 | 67.25 |
table of contents
NAME¶qio - Quick I/O routines for reading files
SYNOPSIS¶
#include <inn/qio.h> QIOSTATE *QIOopen(const char *name); QIOSTATE *QIOfdopen(int> I<fd); void QIOclose(QIOSTATE *qp); char *QIOread(QIOSTATE *qp); int QIOfileno(QIOSTATE *qp); size_t QIOlength(QIOSTATE *qp); int QIOrewind(QIOSTATE *qp); off_t QIOtell(QIOSTATE *qp); bool QIOerror(QIOSTATE *qp); bool QIOtoolong(QIOSTATE *qp);
DESCRIPTION¶The routines described in this manual page are part of libinn(3). They are used to provide quick read access to files; the QIO routines use buffering adapted to the block size of the device, similar to stdio, but with a more convenient syntax for reading newline-terminated lines. QIO is short for "Quick I/O" (a bit of a misnomer, as QIO provides read-only access to files only).
The QIOSTATE structure returned by QIOopen and QIOfdopen is the analog to stdio's FILE structure and should be treated as a black box by all users of these routines. Only the above API should be used.
QIOopen opens the given file for reading. For regular files, if your system provides that information and the size is reasonable, QIO will use the block size of the underlying file system as its buffer size; otherwise, it will default to a buffer of 8 KB. Returns a pointer to use for subsequent calls, or NULL on error. QIOfdopen performs the same operation except on an already-open file descriptor (fd must designate a file open for reading).
QIOclose closes the open file and releases any resources used by the QIOSTATE structure. The QIOSTATE pointer should not be used again after it has been passed to this function.
QIOread reads the next newline-terminated line in the file and returns a pointer to it, with the trailing newline replaced by nul. The returned pointer is a pointer into a buffer in the QIOSTATE object and therefore will remain valid until QIOclose is called on that object. If EOF is reached, an error occurs, or if the line is longer than the buffer size, NULL is returned instead. To distinguish between the error cases, use QIOerror and QIOtoolong.
QIOfileno returns the descriptor of the open file.
QIOlength returns the length in bytes of the last line returned by QIOread. Its return value is only defined after a successful call to QIOread.
QIOrewind sets the read pointer back to the beginning of the file and reads the first block of the file in anticipation of future reads. It returns 0 if successful and -1 on error.
QIOtell returns the current value of the read pointer (the lseek(2) offset at which the next line will start).
QIOerror returns true if there was an error in the last call to QIOread, false otherwise. QIOtoolong returns true if there was an error and the error was that the line was too long. If QIOread returns NULL, these functions should be called to determine what happened. If QIOread returned NULL and QIOerror is false, EOF was reached. Note that if QIOtoolong returns true, the next call to QIOread will try to read the remainder of the line and will likely return a partial line; users of this library should in general treat long lines as fatal errors.
EXAMPLES¶This block of code opens /etc/motd and reads it a line at a time, printing out each line preceded by its offset in the file.
QIOSTATE *qp; off_t offset; char *p; qp = QIOopen("/etc/motd"); if (qp == NULL) { perror("Open error"); exit(1); } for (p = QIOread(qp); p != NULL; p = QIOread(qp)) printf("%ld: %s\n", (unsigned long) QIOtell(qp), p); if (QIOerror(qp)) { perror("Read error"); exit(1); } QIOclose(qp);
HISTORY¶Written by Rich $alz <rsalz@uunet.uu.net> for InterNetNews. Updated by Russ Allbery <eagle@eyrie.org>.
$Id: qio.pod 9767 2014-12-07 21:13:43Z iulius $ | https://manpages.debian.org/testing/inn2-dev/qio.3.en.html | CC-MAIN-2020-10 | refinedweb | 641 | 70.23 |
I want to start my own tile server, I used this manual to set up my machine. And everything is working with a small country. Then I ordered powerful machine and started rendering full planet. db_settings My import command was: command. My import output is: output My database log is: log.
As the result, my renderd doesn't serve tiles and this is its output: link
I made import twice and got the same result. What did I do wrong?
asked
31 Aug '17, 10:01
vovakr
11●1●1●3
accept rate:
0%
Okay, I redid the import. Here is a result: output There weren't any errors or anything strange. And again nothing works. What is the possible problem?
This seems to be ok now. Please explain in which way "my renderd doesn't serve tiles". When you try and access a tile, does it immediately return an error message, or does it take a while before an error is returned? Or do you get a white or blue tile only? Does /var/lib/mod_tile contain any .meta files in any of the subdirectories?
The import was not successful. A successful import will end with a line such as
osm2pgsql took 12,735s overall
But I don't see that in the import output. Can you investigate what else happened at 2017-08-31 00:48:41 UTC (perhaps, the machine running out of memory?).
answered
31 Aug '17, 10:48
SomeoneElse ♦
33.6k●66●350●801
accept rate:
16%
Thank you. My machine is AWS t2.2xlarge (8 cores, 32 GB RAM, 1TB SSD) Could you also recommend some changes to my DB configuration? I will add a swap file. What else can I do to ensure in a successful import?
I can't help with sizing for a planet import, I'm afraid. There are some benchmarks at , but I'd take those with a big pinch of salt - some of them are quite old, when the planet was much smaller.
For completeness (and hopefully useful to someone else) your later import says "Osm2pgsql took 174848s overall", which is just over 2 days elapsed.
Once you sign in you will be able to subscribe for any updates here
Answers
Answers and Comments
Markdown Basics
learn more about Markdown
This is the support site for OpenStreetMap.
Question tags:
import ×184
renderd ×95
planet_osm ×23
question asked: 31 Aug '17, 10:01
question was seen: 1,207 times
last updated: 04 Sep '17, 21:34
Restarting osm2pqsql full planet import because of too low --cache setting? (import runs for 6 days already)
Import only lakes
osm2pgsql planet import killed
Planet import very slow @ Ways
What OSM is not
Importing PDF JOSM
How do I import map data from a .dwg file to OpenStreetMap?
How to get data for two countries ?
Uploading a GPX from TTGpsLogger doesn't work
How can I import data to OSM from a .csv file?
First time here? Check out the FAQ! | https://help.openstreetmap.org/questions/58887/unable-to-render-tiles-after-full-planet-import?sort=active | CC-MAIN-2021-10 | refinedweb | 498 | 83.96 |
Next: The unwind_protect Statement, Previous: The break Statement, Up: Statements [Contents][Index]
The
continue statement, like
break, is used only inside
while,
do-until, or
for loops. It skips over the
rest of the loop body, causing the next cycle around the loop to begin
immediately. Contrast this with
break, which jumps out of the
loop altogether.
Here is an example:
# print elements of a vector of random # integers that are even. # first, create a row vector of 10 random # integers with values between 0 and 100: vec = round (rand (1, 10) * 100); # print what we're interested in: for x = vec if (rem (x, 2) != 0) continue; endif printf ("%d\n", x); endfor
If one of the elements of vec is an odd number, this example skips the print statement for that element, and continues back to the first statement in the loop.
This is not a practical example of the
continue statement, but it
should give you a clear understanding of how it works. Normally, one
would probably write the loop like this:
for x = vec if (rem (x, 2) == 0) printf ("%d\n", x); endif endfor | http://www.gnu.org/software/octave/doc/interpreter/The-continue-Statement.html | CC-MAIN-2014-49 | refinedweb | 189 | 65.05 |
GNOME To Lose Minimize, Maximize Buttons.
Please save your sarcasm (Score:4, Informative)
Re:executive summary of approaches (Score:3, Informative)
Re:Even more reason.... (Score:2, Informative)
Well, let's be clear here, Ubuntu is only replacing Gnome-shell for Unity, not any gnome based applications ( except maybe those that depend on gnome-shell) Also, gnome-panel will be available for users that don't want, or can't run Unity on their machines due to lack of driver support for 3D acceleration..
:)
Re:Gnome always had this problem of bad decisions. (Score:2, Informative)
* Corba
Well, that was maybe not such a good decision. The intention of using Corba was to get a programming language agnostic communication framework for the desktop, and at the time the first experiments with Gnome were made (we're talking about the late '90), Corba was hyped for exactly that purpose. In hindsight Corba was just to complicated and unwieldy to use. Yet the Gnome team learned from their mistake. The shortcomings of Corba in plain view, the need for a more simple and extensible framework led to the creation of D-Bus. And D-Bus is nowadays even empowering KDE, creating a first class bridge between the two frameworks.
* XML
I fail to see how the Gnome devs can be made responsible for the creation of XML. Gnome was started about thee years after the first release of the first XML specification. Also, XML was created by Tim Bray et al., non of which have anything to to with the inception of Gnome.
* GConf
The horrors of the Windows registration lye in its binary format, its inaccessibility, its cryptic structure and keys as well as the fact that it tries to do multiple jobs at once. The only way in which GConf could be compared to the Windows registry is in that they both store configuration settings and that they both work with hierarchical namespaces. But GConf and the Windows registry differ in very important ways; GConf's configuration files are human readable and can be manipulated by the unix command line tool tools. GConf even comes with nice command line tools itself, so scripting is easily achieved. Try to do that with the Windows registry.
* C# and Mono
C# is a Microsoft product, so I fail to see how Gnome ought to be at fault here. Mono also was not created by an active Gnome dev (although Miguel de Icaza is a founding member of Gnome), nor was is a Gnome foundation decision to create Mono. That said, although I tend to avoid Mono applications, it offers a very slick programming environment.
* Umpteen window manager changes, none good enough
Gnome changed window managers from Enlightenment to Sawfish, from Sawfish to Metacity. The next planned step, Mutter, is just a branch of Metacity relying upon the Clutter library. There were very sound reasons for those changes; Enlightenment was its own desktop project and used a different toolkit. Sawfish was written in Scheme, which non of the developers was willing to maintain anymore. As for 'none good enough', that is oppinion.
You fail to see the reasons for dropping those buttons; the Gnome developers are willing to innovate and go beyond the desktop paradigms of the last 25 years. Most of the GUI interaction concepts used today stem from those early years and even many of them provide adequate, some are just plain wrong and even hurtful. We may be using them everyday and thus they seem to be working, but many times we're just working around them. I could name many problems but they mosty center around the theme 'force me to manage my applications instead of letting my do my actual work'. Do you're research and you'll find many topics.
The current drive in the OpenSource community to innovate in desktop paradigms (KDE 4, Unity, gnome-shell) shows that this is a real hurt and I hope that the most obvious flaws of the current GUI metaphors will be addressed soon.
Try to be open and accept that the ways we've been doing things in the last 25 years may not be the final answers to graphical computer interaction. We've gathered a lot of experience, maybe it's a good time to try out some new things.
Re:Gnome always had this problem of bad decisions. (Score:2, Informative)
... from the very beginning.
I lost track of all the "cool" but horrible ideas which made it into gnome.
/---/
/---/
- GConf (the horrors of the windows registry re-implemented by monkeys)
- C# and Mono - embracing Microsoft technology!
GConf [wikipedia.org] have only superficial similarities with the MS Windows Registry. It is more similar to Mac OS X plists (they both use xml-files and deamons that report changes in them, the most important difference is that GConf actually works) and good old fashion unix configuration files. Actually, it is good old fashion unix configuration files, but in xml-format and with a deamon that alert associated application when one of them change. GConf also has an editor to make changes in those files (superficially resembling the Windows registry editor that is used to edit the the MS Windows registry database (one very large file)) and a set of command line tools, if you don't like those tools you can use any text editor you want to change your settings (look in the ~/.gconf directory, it is
... gasp!... full of plain text files).
The Windows registry is a really, really bad idea that has gone far to long. A big and fragile blob of a database that crashes everything once corrupted. To liken gconf with the Windows registry is not fair at all, if you like to compare it with anything, compare it with Mac OS X:s plists, both systems consists of many small separate xml configuration files (only plists are ususally larger and sometimes clash with other configuration files in unexpected ways, it also has (undocumented) deamons that take values from the plists and transfer them to parts of the system that use "classic" unix configuration files, but those parts of OS X become less with time, there used to be many of them but the only one I can think of that is left in 10.6 is CUPS, the time space between a change in a plist and in the other config file used to be a huge source of crashes in Mac OS X). I bet you will find the comparision favorable to GConf.
That said, XML is not very human friendly. They could have picked a simpler to edit/read file format in GConf.
Mono seem like a very bad idea. In my experience, the Mono platform encourage application makers to make really horrible user interfaces and when I have to run a Mono-application, even for the simplest of tasks, my otherwise cool and silent computer is transformed to a very noisy space heater. Most script languages produce applications that run faster and the crash frequency is just horrible.
Re:Gnome always had this problem of bad decisions. (Score:5, Informative) | http://tech.slashdot.org/story/11/03/05/1619223/GNOME-To-Lose-Minimize-Maximize-Buttons?from=rss | CC-MAIN-2014-15 | refinedweb | 1,190 | 67.49 |
Help finding the value of an objc string constant...
Hey all...trying desperately to get this topic through the aggressive spam filter...crazily it seems like the name of the uikit constant I'm asking about is itself being flagged as spam.
Anyway...I'm putting spaces in between each word of the name just to fool the filter:
UI Collection Element Kind Section Header
Trying to find the actual string value of that constant...my usual methods to find a value via google, in both objective-c and swift docs, etc. have all failed. You can see the definition of the constant by googling the name.
Anyone have an idea how to pull the value? I need it to finish my pythonista'd wrapper of a collection view.
Little update...in desperation, I started scrounging around in objc_util.c for the name...and it looks like there actually are both:
UI CollectionElementKind SectionHeader
and
UI CollectionElementKind SectionFooter
symbols defined in there...though they show up as ctypes._FuncPtr objects...attempting to call them without arguments, or with None as the argument, crashes pythonista.
So...anything I can do with those?
@shinyformica said:
UI CollectionElementKind SectionHeader
I fail to find anything useful. Earlier there was a thread where some trick was used to dig some of these values from the framework DLL, see if I can find it.
Just checking: Have you tried just using the
constant name as a string?
@mikael indeed, that was the first thing I tried...unfortunately it didn't work. I actually think I am missing something obvious here. I can find where it is defined in objective-c, but the definition contains no actual string value, and I can find no place where a string value is specified:
NSString *const UI Collection ElementKind SectionHeader;
(as before, remove the spaces in the spammy name of the constant).
@shinyformica, I found this thread.
I do not understand enough of what is happening there, but your constants are included in the list that the code gives.
With:
addr = ctypes.c_void_p.in_dll(objc_util.c, 'UICollectionElementKindSectionFooter')
I get a pointer to something, but nothing legible.
@shinyformica, that was just a long way to say that we obviously need @dgelessus and @JonB for this, even if you were too polite to say it.
A step more 😀
from objc_util import * x = c.UICollectionElementKindSectionFooter
@mikael thank you, checking out that thread now...and yes I was secretly hoping one of the objc gurus on here would know what I was missing. I think what's happening here is that Apple is actually using the address of the NSString* and not any actual string value.
@cvp are you saying that I can just directly use the value I pull from objc_util.c.UI CollectionElementKindSectionHeader in the objc method call? With no translation?
.)
- shinyformica
@dgelessus thanks so much for the deep knowledge!
So now this whole thing is working! As it turns out a lot of this was a red herring.
In summary:
- It is perfectly legal to use the python string value "UICollectionElementKindSectionHeader" directly in the
registerClass_forSupplementaryViewOfKind_withReuseIdentifier_()objc method, since it is converted to an NSString object implicitly.
- The value returned by
ctypes.c_void_p.in_dll(objc_util.c, "UICollectionElementKindSectionHeader")works as well, I assume since that c_void_p ends up being interpreted as an NSString object.
- Of course,
objc_util.ObjCInstance(ctypes.c_void_p.in_dll(objc_util.c, "UICollectionElementKindSectionHeader"))works, and is probably "safest" since you get the actual value without having to guess, and it is turned into a correct type.
- You cannot directly use the _FuncPtr value returned by
objc_util.c.UICollectionElementKindSectionHeaderbut there might be some way to convert that _FuncPtr value (just doing a
ctypes.cast(funcptr, ctypes.c_void_p)does not work).
Now, the reason the whole thing was a red herring: collection views won't even bother to call the
collectionView_viewForSupplementaryElementOfKind_atIndexPath_()method unless the flow layout object being used to layout the items has had its
headerReferenceSizeproperty set to something other than the default
CGSize(0,0)...so even when I was successfully registering the class with the method to create section headers, it wasn't being called.
@shinyformica sorry to answer so late, but I wrote my post in my bed 😀 and, anyway, I don't know anything about ObjectiveC. I think only gurus mentioned by @mikael could give you an explanation. This forum is marvelous, isn'it?
@shinyformica, what’s up with your UICollectionView wrapper? Anything I could try?
- shinyformica
@m. | https://forum.omz-software.com/topic/6077/help-finding-the-value-of-an-objc-string-constant/? | CC-MAIN-2022-33 | refinedweb | 739 | 58.08 |
Introduction
C was the first programming language that I learned in systematic ways, thanks to my college. At that time I used to think about where I am going to use this language but in books, it wasn't clean like water, just a paragraph about it.
I see C is not a very popular language among beginners and even professionals, so today we are going to look into the use of the C language in the industry.
Operating System
Program scripted in C language gets executed with speed as compared to other assembly languages. C is primarily used in developing various operating systems.
The operating system using C language
- Linux
- Unix Kernel
- Microsoft Windows Utilities
- Android Operating System
- Many other Modern OS
Development of New Language
C, directly and indirectly, helps in the development of many other programming languages due to its simplicity and efficiency in the execution of code.
The language developed using C
- C++ (C with classes)
- D
- Java
- Javascript
- Perl
- Python
- And many more
Top 7 Programming Books Recommended By Programmers
Suraj Vishwakarma ・ Jan 4 ・ 4 min read
Embedded System
C language is used in an embedded system because of direct access to the hardware level, presence of C compiler, dynamic memory allocation. It plays a key role in performing a specific function by the processor.
Embedded System using C language
- MP3 Player
- Video game console
- Digital Camera
- Much More
Graphics and Games
C language heavily involved in game development and contribute to the development of various games.
C language can be used by beginners to learn game development.
Games Developed using C language
- Old games like
- Doom
- Quake I, II, and III
- Return to Castle Wolfenstein
- Modern games like
- Chess
- Bouncing Ball
- Archery
CSS Tips To Start Thinking Responsive Website
Suraj Vishwakarma ・ Oct 8 '20 ・ 2 min read
Computation Platforms
Algorithm and Data structure are swiftly managed by C programming language and it helps in faster computation. This helped C to be used for a higher degree of calculation.
Uses of C in Computation Platforms
- MATLAB
- Mathematica
Discuss
- Is C the most underrated programming language? and Why?
- If not then which is the most underrated programming language and Why?
Last Note
Thanks for reading the blog post.
Discussion (37)
Props to Dennis Ritchie for being light years ahead of his time 🧙♂️✨
If I remembered correctly, he was unfortunate to die the same day as Steve Jobs.
Within the week, RIP both 🙏
Two legends ❤️ RIP
Yess he was that made this language very useful
C was great at the time of invention. It's still OK, but definitely not underrated. Probably the most underrated language by now is Rust. Rust basically puts resource management (not just memory) into the hearth of language and does this in style and with nearly zero performance impact.
The most underrated language(family) will always be Lisp
It's among the oldest programming language that we know. Very few people today used this language and don't know anyone using it.
Clojure is fairly popular.
It's more of a modern languages than other member of the group
Look like Rust need to be know to other for resource management.
There are a lot of articles about memory management in Rust. This one is quite good at explaining the advantages of Rust resource management model at mode general level (i.e. for other resources, not just memory).
Sure I will check that to learn more about it
I will recommend instead of starting with c. A newbie can start with c++ . because c is the preprocessor of c++ and therefore all the features of c are included in c++ and the plus point of c++ is that you can learn the concept of oops also which is not there in c.
Btw informative article . respect to your hardwork. 👍 😊
You can implement oop in C, just using a struct and functions which accept a pointer to said struct. You don't get private members or attributes but the same is true of Python
Implementing oop is not only limited for struct. There are other concept like inheritance,polymorphism, Encapsulation , constructors are also there which we cannot implement in c.
Okay that's something interesting and it will make C more useful.
There's objective-c which is a superset of C.
Okay I have to try it, I wasn't knowing this. Thanks 👍
It's been usperseded by Swift as it was invented to develop MacOS applications int he '90s.
Thanks for the information and it will help many to understand it.
I think C is very underrated! But I'm highly biased since I work with Embedded systems and C is my main language 😆. Really though, I think it's underrated in terms of the number of people who would consider C as their first language. Or even in terms of people who eventually want to pick it up and develop low level programs with C.
Yes in such cases C is very underrated and very few people today talk about and recognise as a language that make a way for other Languages.
I personally think C is a brilliant language and is beginner-friendly for anyone who is new to programming. Yeah, it's true that C is a bit old language and nowadays we have languages which can do a lot more than C but I really don't think that C is underrated in any form. C is also the most popular language of 2019.
Well, there are many forgotten languages which even I have forgot... xD
Yeah, C is great for beginners and has concept that are used my many Language today. But in mainstream, people don't talk about C.
I highly recommend everyone to read the book created for this language. The C Programming Language is one of the most enjoyable programming reads I have found. Every time I think about writing a book I wish I can write as good as Dennis Ritchie. I can remember the first time I read that book on college, it was my awakening to professional programming, I realized I was doing all wrong.
Nice we can give it a try 🔥🔥
It is possible, and probably also encouraged, to write C language along side Golang. Not sure about C++.
Also in Golang, you can choose which part of C (cgo) to compile.
That's great 🔥🔥🔥
why Golang?
Python and Node.js also have C binding, but I haven't seen anywhere else that I am given to compile custom parts of SQLite myself. (All flags unchecked by default.)
But I do also see that people do compile COLLATION in Rust SQLite.
I also have written cgo in Webview. Golang just made it convenient, with declarations in comments above
import "C".
I don't have enough experience in R, LaTeX or Lua to say anything about them.
Thanks you for the peice of information 🔥🔥🔥
C has its niche, but for most mainstream projects, it is not worth to sacrifice development speed without seeing any substantial benefits.
I read on Hacker News the other day that Rust is becoming faster overall in benchmarks than C.
Rust is becoming popular and powerful
I think all JS Engines are C++ but anyhow good article.
And C++ is directed influenced by C and thanks for appreciation | https://practicaldev-herokuapp-com.global.ssl.fastly.net/surajondev/is-c-most-underrated-programming-language-1bhn | CC-MAIN-2021-21 | refinedweb | 1,222 | 62.58 |
Only public functions defined in public classes will be indexed.Starting with this release, the SDK will only look for public functions defined in public classes. If you have non-public functions or they are defined in a non-public class, the functions will not be indexed and the SDK will not be able to call those functions. The following is a valid function definition:
public class Functions { public static void ProcessQueue([QueueTrigger("input")] string input) { } }The following will not work because the containing class is not public:
class Functions { public static void ProcessQueue([QueueTrigger("input")] string input) { } }
Parallel execution with QueuesSupport for async functions was added in the 0.4.0-beta release of the SDK. As part of that release the SDK added parallelism where functions listening on different queues could be triggered in parallel. In this release, we added support for fetching messages for a queue in parallel within a QueueTrigger. This means if a function is listening on a queue as shown below, we will get a batch of 16 (default) queue messages in parallel for this queue. The function is also executed in parallel.
public class Program { static void Main() { JobHost host = new JobHost(); host.RunAndBlock(); } public static void ProcessQueue([QueueTrigger("input")] string input) { } }The following code will add the same functionality that was there in 0.4.0-beta. You can configure the batch size with the JobHostConfiguration class as follows:
public class Program { static void Main() { JobHostConfiguration config = new JobHostConfiguration(); config.Queues.BatchSize= 1; JobHost host = new JobHost(config); host.RunAndBlock(); } }
BlobTriggers will be processed only onceIn the previous releases BlobTriggers were always reprocessed until a newer Blob output existed. This meant that in some cases blobs would be reprocessed. This release of the SDK ensures that BlobTrigger will only be processed when new blobs are detected or existing blobs are updated. The following code shows how you can trigger a function when a blob is created or updated.
public class Program { static void Main() { JobHost host = new JobHost(); host.RunAndBlock(); } public static void ProcessBlob([BlobTrigger("test/{name}")] string input) { } }These changes also allow to start a blob to queue based workflow when a BlobTrigger is processed. The following code shows how to write a queue message when a BlobTrigger is processed.
public class Program { static void Main() { JobHost host = new JobHost(); host.RunAndBlock(); } public static void BlobToQueue( [BlobTrigger("test/{name}")] string input,string name, [Queue("newblob")] out string message) { message = name; } }The SDK will add a container called “azure-webjobs-hosts” to your Azure storage account (specified by AzureWebJobsStorage) where the SDK maintains a blob receipt for each blob that it has processed and it uses this container to keep processing status of each blob. The blob receipt has the following information for a particular Blob that was processed:
- What function was triggered for this Blob (FunctionId)
- container name
- blob type
- blob name
- ETag – version of the blob.
Retry and Error handling for BlobsThis. The queue message contains the following information as a JSON serialized string:
- FunctionId – Id of the function for which the blob was processed.
- BlobType – Type of Blob eg PageBlob or BlockBlob
- ContainerName of the blob
- BlobName
- ETag – version of the blob that caused a failure.
public class Program { static void Main() { JobHost host = new JobHost(); host.RunAndBlock(); } public static void ProcessBlob([BlobTrigger("test/{name}")] string input) { throw new Exception(); } public static void BlobErrorHandler( [QueueTrigger("webjobs-blobtrigger-poison")] BlobTriggerErrorMessage message) { } public class BlobTriggerErrorMessage { public string FunctionId { get; set; } public string BlobType { get; set; } public string ContainerName { get; set; } public string BlobName { get; set; } public string ETag { get; set; } } }This sample also shows you how you can strongly type the queue message to a class called BlobTriggerPosionMessage since the message is a JSON-serialized string and the SDK allows you to bind a JSON serialized object to a POCO (Plain Old CLR Object). The following code how you can configure the retry count for processing blobs. This is the same configuration object used to handle poison messages for a queue. This means this setting controls the retry count for retrying functions which are processing either blobs or queues.
public class Program { static void Main() { JobHostConfiguration config = new JobHostConfiguration(); config.Queues.MaxDequeueCount = 2; JobHost host = new JobHost(config); host.RunAndBlock(); } } | https://azure.microsoft.com/pt-pt/blog/announcing-the-0-5-0-beta-preview-of-microsoft-azure-webjobs-sdk/ | CC-MAIN-2017-04 | refinedweb | 709 | 50.97 |
anyattribute ##other?
Discussion in 'XML' started by thomas smith, Aug 12, 2004.
Want to reply to this thread or ask your own question?It takes just 2 minutes to sign up (and it's free!). Just click the sign up button to choose a username and then you can ask your own questions on the forum.
- Similar Threads
Which should be the first? Windows 2000 update (sp4 and all other) and install VS.NetCM, Oct 2, 2003, in forum: ASP .Net
- Replies:
- 1
- Views:
- 530
- Ed Kaim [MSFT]
- Oct 14, 2003
- Replies:
- 0
- Views:
- 551
How to "force" my object never been changed by other other objectsusing it?Chris Dollin, Feb 8, 2007, in forum: Java
- Replies:
- 8
- Views:
- 391
- Eric Sosman
- Feb 8, 2007
Schema: anyAttribute for several namespaces, Oct 29, 2007, in forum: XML
- Replies:
- 3
- Views:
- 579
Month, Day etc. names in other languages other than EnglishDiego, Aug 15, 2008, in forum: Ruby
- Replies:
- 2
- Views:
- 198
- Stefan Rusterholz
- Aug 16, 2008 | http://www.thecodingforums.com/threads/anyattribute-other.167653/ | CC-MAIN-2015-22 | refinedweb | 165 | 80.51 |
What a long strange trip it's been over the late spring and early summer in 2011. Thankfully, everything is up and going in the world of MonoTouch. This is the latest article in what is a series of articles on iPhone development for .NET/C# developers using MonoTouch. To get up to speed on MonoTouch, check out my first article from the April 2011 issue of DevProConnections, "Guide to Building iOS Applications with MonoTouch for the .NET/C# Developer."
Article Overview
Data is what makes applications go. It could be a Twitter search, a running game score where you are playing against your friends, sales data, or any other type of data that users want to base decisions on. In this article, we're going to look at presenting tabular data to users in a UITableView. The UITableView has a number of visually attractive default styles that you can use. After we're done looking at these, we'll look at creating a custom UITableView layout. Along the journey, we'll look at some optimizations we can do that will give the user an improved experience. After we're done with this, we'll look at some strategies to get at various data sources, such as Representational State Transfer (REST), Windows Communication Foundation (WCF), SQL Server, and the on-board SQLite database.
Limitations with Data Apps in iOS
I would be remiss if I didn't mention some of the problems you may run into when using data in the iPhone. These data limitations have very little to do with MonoTouch and much more to do with the general limitations of mobile as well as security issues that you need to be aware of.
First off, the iPhone has a watchdog timer. The timer is "watching" your application. If the timer thinks that your application has locked up in any way, such as locking the UI thread for too long, iOS will kill your application. What does this mean? If your application makes a network request that takes a long time, your application has a good possibility of being killed by the system. This can happen if you make this web request on the UI thread. You want to make requests on a separate thread.
Second, if you make even small requests on the UI thread, you are making a request that can result in "jerky" responsiveness to the user. Your users won't like this. Third, using WCF services in MonoTouch is different from just adding a reference in MonoDevelop to a service and calling methods on the proxy. iOS limits an application's ability to dynamically create code at runtime. As a result, calling WCF is slightly different in iOS. We'll walk through what you have to do there. Finally, when we get to the section on working with SQL Server, I am in no way, shape, or form suggesting that you open up your database server and its associated ports to the public Internet.
UITableView
The UITableView is the basis for presenting data to users. This data can come from any source. As long as the data is readable, we can display it to the user, as the example in Figure 1 shows. First off, let's do the basics. We'll open up the .xib file in a basic project. Then we'll drag a UISearchBar and a UITableView into our window design surface. In my example, I've created a couple of outlets. The UITableView is searchTable, and the UISearchBar is searchTerm. Now that you have created these, save and close XCode.
At a high level, let's look at my code for wiring this up in my controller class, shown in Figure 2.
public override void ViewDidLoad () { base.ViewDidLoad (); ts = new TwitterSearch(); searchTerm.SearchButtonClicked += HandleSearchTermSearchButtonClicked; //any additional setup after loading the view, typically from a nib. } void HandleSearchTermSearchButtonClicked (object sender, EventArgs e) { var TermToSearchOn = searchTerm.Text; searchTerm.ResignFirstResponder(); ts.StartSearch(TermToSearchOn, new AsyncCallback(ProcessResult)); } public override bool ShouldAutorotateToInterfaceOrientation (UIInterfaceOrientation toInterfaceOrientation) { // Return true for supported orientations return (toInterfaceOrientation != UIInterfaceOrientation.PortraitUpsideDown); } void ProcessResult(IAsyncResult iar){ List
twtL = ts.ProcessRestXmlLINQHttpResponse(iar); var td = new TweetListData(twtL); InvokeOnMainThread(delegate{ searchTable.DataSource = td; searchTable.ReloadData(); } ); }
With this code, we've created a new TwitterSearch object and will be using it to perform searches against Twitter. The search object will perform the calls asynchronously.
When the user clicks on the search bar to perform an actual search, the StartSearch method is called. Inside the StartSearch method, the call to the Twitter Search API is made asynchronously. The code inside my search class looks like that in Figure 3.
public class TwitterSearch { private string TwitterUrl = "{0}&rpp=100"; public TwitterSearch () { } public void StartSearch(string term, AsyncCallback iac) { string Url = String.Format(TwitterUrl, term); try { // Create the web request HttpWebRequest request = WebRequest.Create(Url) as HttpWebRequest; // Set type to POST request.Method = "GET"; request.ContentType = "application/xml"; request.BeginGetResponse(iac, request); } catch { //do something throw; } } public List
ProcessRestXmlLINQHttpResponse(IAsyncResult iar) { List twt; try { HttpWebRequest request = (HttpWebRequest)iar.AsyncState; HttpWebResponse response; response = (HttpWebResponse)request.EndGetResponse(iar); System.IO.StreamReader strm = new System.IO.StreamReader( response.GetResponseStream()); //string responseString = strm.ReadToEnd(); System.Xml.Linq.XDocument xd = XDocument.Load(strm); XNamespace atomNS = ""; twt = (from tweet in xd.Descendants(atomNS + "entry") where tweet != null select new Tweet { StatusDate = tweet.Element(atomNS + "updated").Value, Status = tweet.Element(atomNS + "title").Value, ProfileImage = tweet.Elements(atomNS + "link").ElementAt(1).Attribute("href").Value, UserName = tweet.Element(atomNS + "author").Element(atomNS + "name").Value }).ToList (); } catch { //do something throw; } return(twt); } }
Once the data is returned asynchronously, we then call ProcessRestXMLLinqHttpResponse, which will process it using LINQ to XML to create the necessary objects. Both of these methods are used to make the call and process the results and should look familiar to .NET/C# developers.
The next high-level step in the process is to create a data source object. On my searchTable object, I will set the DataSource property and then call the .ReloadData() method. For ASP.NET Web Forms developers, this is very familiar. We do the same thing when binding data to a GridView, where we set the .DataSource property and call .BindData() on the GridView.
Now, let's jump into the iPhone-isms. The first obvious one is InvokeOnMainThread(delegate). We have to remember that we are running on a non-UI thread when we are making asynchronous calls. If we want to write to a UI element, we need to do it from the UI thread. This is done via the InvokeOnMainThread method. If we don't use InvokeOnMainThread to write to the UI, we will either get an error, or we won't get anything to happen. Note: If you are using threading in your MonoTouch application and nothing happens, you probably need an InvokeOnMainThread method call somewhere in your code.
Let's dig into our data object that we are going to bind against. Figure 4 shows the object that I've created.
public class TweetListData : UITableViewDataSource { private List
_data; #region implemented abstract members of MonoTouch.UIKit.UITableViewDataSource public override int RowsInSection (UITableView tableView, int section) { return(_data.Count); } public override UITableViewCell GetCell (UITableView tableView, MonoTouch.Foundation.NSIndexPath indexPath) { // TODO: Implement - see: string cellid = "cellid"; UITableViewCell cell = tableView.DequeueReusableCell(cellid); if ( cell == null ){ cell = new UITableViewCell(UITableViewCellStyle.Subtitle, cellid); } cell.TextLabel.Text = _data[indexPath.Row].Status; } return(cell); } #endregion public TweetListData (List data) { _data = data; } }
The result of running this code looks something like the screen shown in Figure 5.
Built-in Styles
There are four built-in styles available through iOS:
- .Default: This style provides a cell with a left-aligned and black color label and an optional image view.
- .Value1: This style provides a left label with a left-aligned and black color label and a right-aligned label with a blue color label.
- .Value2: This style provides a left blue color label with text that is right aligned and a left-aligned label with a black color label.
- .Subtitle: This style provides a left-aligned label on the top that has the black color and a left-aligned label on the bottom that has the gray color.
Let's look at some new code for the GetCell method, shown in Figure 6.
public override UITableViewCell GetCell (UITableView tableView, MonoTouch.Foundation.NSIndexPath indexPath) { string cellid = "cellid"; UITableViewCell cell = tableView.DequeueReusableCell(cellid); if ( cell == null ){ cell = new UITableViewCell(UITableViewCellStyle.Subtitle, cellid); } cell.TextLabel.Text = _data[indexPath.Row].Status; cell.DetailTextLabel.Text = _data[indexPath.Row].UserName; var img = Convert.ToString(_data[indexPath.Row].ProfileImage); NSUrl nsUrl = new NSUrl(img); NSData data = NSData.FromUrl(nsUrl); if ( data != null ) { cell.ImageView.Image = new UIImage(data); cell.ImageView.SizeToFit(); } return(cell); }
When you run the application, you get the Twitter profile image, the Tweet, and the Twitter user ID. (Click the Download button at the top of this article to download the complete code for the Twitter search iOS application discussed in this article.) As you scroll through the data, you will probably notice that the scrolling is jerky. Requests to load the image are continuing in the background. Ultimately, it's not really a smooth experience for the users. These requests are happening on the main UI thread. Here are a few ways that I have thought of to improve the user's experience of scrolling through the data.
- The first, and simpler way, is to implement some caching. This can be done via a dictionary object. Unfortunately, the downside to this is that the initial download is still done on the UI thread. However, subsequent requests are satisfied via the dictionary. This could be helpful since ultimately, the requests are still done one at a time and on the main thread.
- After download of the tweets occurs, we loop through each tweet and download the image needed. Each download is done via a .NET 4 Task. (Yes, Virginia, you can use many of your favorite .NET features in MonoTouch.) Doing so is easy and simple, however, your code will still have to track the images, which is a platform-specific feature.
- Another option for downloading the images is to use .NET threads to pull the images down on a non-UI thread. Performing this task on a non-UI thread removes the jerkiness in scrolling. We'll look at a threading solution in the section on custom cells and in the section on creating a custom UITableViewCell.
- Finally, I'm sure that there are many other options that you can use.
Custom Cells
All of this sounds so good. Unfortunately, what happens when the defaults just don't fit what you want to do? Thankfully, you can create your own custom look. Let's look at the steps to do this.
First, let's create a new iPhone View with a controller. Create a new file by navigating to MonoDevelop, iPhone, iPhone View and Controller. Give this a file name and select New. In our example, the file is called MyCustomCellWithController, as shown in Figure 7.
Once you click the New button, you should see something similar to the image shown in Figure 8.
You now have an .xib file that contains the UI layout, the .cs file that contains the location where we can implement some custom code to modify the UI elements, and finally, the .xib.designer.cs file.
The next step is to open our .xib file by double-clicking it, which opens the file in Xcode. Once in Xcode, you will want to drag the UITableViewCell from the library to the window titled View, as shown in Figure 9.
I suggest you make the window area an appropriate height, so that we can easily add controls to our view. In our example, we'll add a UIImageView and UILabel. Finally, we'll create and set up outlets for the cell, the UIImageView, and the UILabel. This is performed by creating a custom UITableViewCell and manually creating your own getters and setters.
Your UITableView probably looks something like the image in Figure 10.
Now that you have this, you can see that we're only getting one line of content and notice that the scrolling is a little bit jerky. The problem is that we're currently downloading an image, which can be fairly large for a mobile Internet connection, on the main thread. Let's look at how we can resolve these issues.
The display of the content can be resolved via modifying our getters and setters, as shown in Figure 11..Text = value; output.Lines = 0; output.LineBreakMode = UILineBreakMode.WordWrap; output.Font = UIFont.FromName("Helvetica", 12); output.SizeToFit(); } } public UITableViewCell Cell { get { return(cell); } }
The two things to note in this code are:
- A URL string is passed in via the TwitterImage setter. When the string is passed in, a ThreadPool thread is created, and work will begin on downloading the image. Moving this off the UI thread eliminates the jerkiness in scrolling the UITableView. This can be seen in the MyCustomCellWithController.cs file. To make this process a little more efficient, we could cache the content, but that is something I'll leave for you to work on.
- In iOS 4 and earlier, the UILabel doesn't wrap display content automagically in a way that most .NET developers are familiar with. Because it doesn't wrap content or expand to take up the necessary space, we need to handle this on our own. Once the value is set, the lines and wrap properties are set, as well as a call to SizeToFit().
The values are set via getters and setters within our partial class, shown in Figure 12.
public partial class MyCustomCellWithController : UIViewController { public MyCustomCellWithController () : base ("MyCustomCellWithController", null) { } public override void DidReceiveMemoryWarning () { // Releases the view if it doesn't have a superview. base.DidReceiveMemoryWarning (); // Release any cached data, images, etc that aren't in use. } public override void ViewDidLoad () { base.ViewDidLoad (); //any additional setup after loading the view, typically from a nib. } public override void ViewDidUnload () { base.ViewDidUnload (); // Release any retained subviews of the main view. // e.g. myOutlet = null; } public override bool ShouldAutorotateToInterfaceOrientation (UIInterfaceOrientation toInterfaceOrientation) { // Return true for supported orientations return (toInterfaceOrientation != UIInterfaceOrientation.PortraitUpsideDown); }.Font = UIFont.FromName("Helvetica", 12); output.Text = value; output.Lines = 0; output.LineBreakMode = UILineBreakMode.WordWrap; output.SizeToFit(); } } public UITableViewCell Cell { get { return(cell); } } }
This partial class allows us to access the elements within the MyCustomCellWithController in a safe way. The .xib.cs file contains our getters and setters for the class as well as initialization routines.
Delegates
Often, there is a need to change the default behavior of an object in iOS. In this situation, we need to change the default behavior of the UITableView. Specifically we'll want to change the height of the row so that the content fits within it, and we'll want to do something when a cell is selected. To do this, we'll create a delegate object, as shown in the code in Figure 13.
public class TweetViewTableController : UITableViewDelegate { private UIFont font = null; private List
_twts = null; public List TweetList { get{ return(_twts); } set{ _twts = value; } } public TweetViewTableController() { if (font == null) font = UIFont.FromName("Helvetica", 12.0f); } public override void RowSelected (UITableView tableView, NSIndexPath indexPath) { // TODO: Implement - see: Console.WriteLine(String.Format("Row clicked: {0}", indexPath.Row)); } public override float GetHeightForRow(UITableView tableView, NSIndexPath indexPath) { var TotSize = (TweetListData._data[indexPath.Row].Status.Length / 18 + 1) * font.LineHeight; var maxHeight = 120f; if ( TotSize < maxHeight ) { TotSize = maxHeight; } return(TotSize); } }
In this case, we'll call it TweetViewTableDelegate. Our object will inherit from the UITableViewDelegate, and then we will override the RowSelected and GetHeightForRow methods. These methods will be used when a user clicks on a cell (RowSelected) and when the height of the row needs to be determined (GetHeightForRow).
Other methods could be overridden as well. You can determine what types of functionality you want to override by digging into the available methods -- simply search on "public override" to see some of the available methods.
Retrieving Data
Now that we've seen what we can do with data, the next logical question to answer is where to get data. In .NET on the desktop, accomplishing this is fairly easy. We would go and fire up ADO.NET and get some data via LINQ, Entity Framework, DataTables, NHibernate, or any other type of tool that allows us to query data and return it to the client in some way.
Unfortunately, the mobile world has some constraints that don't exist in the desktop world. Mobile connections are slower than a typical wired connection. In my house, my wired connection via a cable modem is multiple times faster (10 times) than a mobile 3G connection. Even when I am in a location that supports the Sprint 4G network, my cable modem in my house is still about three times faster.
Mobile networks are typically less reliable than wired networks. When I walk through the locker room in my gym, I get zero bars of connectivity. When I walk out of the locker room, the connection immediately goes up to four to five bars of connectivity. Finally, the latency of a wireless network is higher. Latency is the amount of time that is required from the initial request of the device until the initial response is returned back to the device. For a wired connection, a good amount of latency is on the order of 50 ms. For a mobile device, latency could be on the order of thousands of milliseconds. In your mind, this may not sound like much, but in reality this can be significant. It can seem like your TCP/IP packets have to travel from your device, to the moon, off to their destination, and then back.
Another important constraint is the watchdog timer. iOS implements a watchdog timer. If the main UI thread is locked waiting for something to happen, and the watchdog timer hits its timeout point, the watchdog will kill your app.
As a result of these issues, we will want to use asynchronous requests whenever we go off the device as much as possible. Note: Since our web service calls will be going over the public Internet, you should look at using HTTPS for as many of your web service calls as possible.
Using WCF
Now that we understand the constraints that we have with mobile devices, how do we get at data using WCF? .NET developers are familiar with the concepts and how to use WCF in a .NET 4 application running on Windows.
Unfortunately, iOS doesn't allow us to use WCF web services as we would expect. In a Visual Studio-based application, we can create a reference to a WCF web service and Visual Studio will handle setting up the necessary proxies. So, the question becomes how do we get the proxies necessary to make these calls? Thankfully, we can create these on our own. Let's look at the steps to use a WCF service with MonoTouch:
- We need to manually create a proxy file. This is done via a command-line utility from the Silverlight SDK. The command is this:
SlSvcUtil.exe /noConfig
This will create a .cs file that can then be used inside of a MonoTouch application.
- Import the .cs file into your application in MonoDevelop on the Mac.
- Add references to System.Runtime.Serialization, System.ServiceModel and System.ServiceModel.Web.
- You'll need to add a using statement for System.ServiceModel.Web.
Now that you have completed this, you'll be able to program against the service. To program against the service, your code will look something like that in Figure 14.
voidHandleBtnWCFAsyncTouchUpInside (objectsender, EventArgs e) { AddNumberServiceClient asc = newAddNumberServiceClient( newBasicHttpBinding (), newEndpointAddress ("") ); asc.AddNumbersCompleted += HandleAscAddNumbersCompleted; asc.AddNumbersAsync(3, 4); } voidHandleAscAddNumbersCompleted (objectsender, AddNumbersCompletedEventArgs e) { InvokeOnMainThread( delegate{ lblOutput.Text = "Result: "+ e.Result.TotalNum.ToString(); } ); }
As you can probably guess from looking at my code, I have a web service that will take two numeric inputs, add them together, and return the result. As you look at the code, everything makes some amount of sense. A method is assigned for the .AddNumbersCompleted event, and then the method AddNumbersAsync is called with the two numbers. When the value is returned, the one thing that looks a little bit strange is the InvokeOnMainThread. When we make an asynchronous query, the result is returned on a non-UI thread. We can only write on a UI control from the UI thread, so we use InvokeOnMainThread to have some code execute on the main thread.
Great, now we know how to use WCF. Unfortunately, the support for WCF in MonoTouch is listed as experimental. If your WCF calls are used in conjunction with HTTPS, you're probably fine. However, don't expect all features of WCF to work perfectly in every possible configuration. WCF is just too complicated to be able to easily implement support in a non-Windows mobile environment without access to a lot of the source code for WCF and rather involved understanding of the internals of WCF.
REST
REST is another way to handle communication with web services. A discussion of what REST is, and its concepts, is beyond the scope of an article about handling data in mobile apps. If you are looking to get up to speed on REST, check out the Representational state transfer Wikipedia page, which is a good starting point to learn more about REST. For us as developers, we'll need to understand a few concepts to get started with.
First, REST is based on using the HTTP verbs for its action. For example, querying data is typically associated with GET. Adding data typically is associated with a POST. Deleting an object is typically associated with DELETE. And so on. I will deviate from this in my examples. I'll use a POST for most of my operations.
With REST, there is no proxy support, so developers will have to know all of the data that goes in and all of the data that comes out.
There are several data formats that you'll want to be familiar with. These are eXtensible Markup Language (XML) and JavaScript Object Notation (JSON). As a .NET developer, you are probably familiar with XML, but most likely less familiar with JSON. For more info about JSON, you can start at the JSON Wikipedia page.
Now, let's look at some code to call some REST-based web services. This first web service is a call to the Twitter API to receive some data. In the example in Figure 15, we'll make a call to get some JSON data back. As you look at the code, you can see that this code is exactly the same code that you would see if you were calling from a .NET 4 desktop application.
void HandleBtnRESTJSONJsonLINQHttpResponse), request); } catch (WebException we) { Console.WriteLine(String.Format("Web exception: {0}", we.Message)); } } void ProcessRestJsonLIN.Json.JsonArray ja = (JsonArray)JsonArray.Load(strm); var twt = (fromx inja select newTweet { StatusId = x["id"].ToString(), UserName = x["user"]["screen_name"].ToString(), ProfileImage = x["user"]["profile_image_url"].ToString(), Status = x["text"].ToString(), StatusDate = x["created_at"].ToString() }).ToList
(); InvokeOnMainThread(delegate{ lblOutput.Text = String.Format("Records Returned: {0}", twt.Count); } ); }
When you look at the callback, you are probably seeing something that looks a little bit different. The System.Json namespace is something that is actually not in the .NET 4 client profile. Thankfully, Microsoft has included this in the Silverlight profile, therefore it's an entirely valid namespace that .NET developers can use. In our code, we're using a JSON array to hold the contents that are returned to us.
The next step is to create a set of objects. This is done using a LINQ query. Given the popularity of JSON and the ability to use LINQ, this is a great thing. As an fyi, we could just as easily have iterated through the JSON array, but LINQ is so much cooler and more fun to work with. Yes, Virginia, we have LINQ in MonoTouch.
Now that we have looked at JSON support, let's take a look at using XML. You can see in Figure 16 that once again, we are calling a REST API at Twitter, and we're pulling data back.
void HandleBtnRESTXmlLINQHttpResponse), request); } catch (WebException we) { Console.WriteLine(String.Format("Web exception: {0}", we.Message)); } } void ProcessRestXmlLIN.Xml.Linq.XDocument xd = XDocument.Load(strm); var twt = (fromx inxd.Root.Descendants("status") where x != null select newTweet { StatusId = x.Element("id").Value, UserName = x.Element("user").Element("screen_name").Value, ProfileImage = x.Element("user").Element("profile_image_url").Value, Status = x.Element("text").Value, StatusDate = x.Element("created_at").Value }).ToList
(); InvokeOnMainThread(delegate{ lblOutput.Text = String.Format("Records Returned: {0}", twt.Count); } ); }
In the callback, you can see that we're using LINQ to XML and are hydrating our objects. The last thing to note is that I am going ahead and forcing the query to run via the call to .ToList. This isn't strictly required, but done merely for my benefit.
SQLite
Handling data over a web service to a cloud data store is a great thing. Unfortunately, it isn't always the best thing when dealing with data in a mobile device. Doing too much communication over an antenna will hit the battery on a device too much. Also, what happens if you have a lot of storms in your area and lose the cellular network for hours or days? We need a way to store data on the device, and an application will upload that data at another time.
Thankfully in this regard, Apple has included the SQLite relational database with iOS. SQLite is an embedded relational database. It allows a program to store data locally and then send that data to the great data store in the sky. Since we're all familiar with ADO.NET, the API story for SQLite is basically the same as using any other database with .NET. There is a Mono.Data.Sqlite namespace that contains all the ADO.NET objects that we're familiar with, such as a Connection, Command, Parameter, DataAdapter, and similar objects. Let's take a look at some workflow used in the code example in Figure 17 to connect to create and use SQLite.
string dir = Environment.GetFolderPath(System.Environment.SpecialFolder.Personal); string dbFile = "Test.db3"; string db = Path.Combine(dir, dbFile); string dbConn = String.Format("Data Source={0}", db); SqliteConnection conn = newSqliteConnection(); SqliteCommand cmd = newSqliteCommand(); if ( !File.Exists(db) ) { SqliteConnection.CreateFile(db); } conn.ConnectionString = dbConn; cmd.Connection = conn; conn.Open(); string[] sql = newstring[]{ "CREATE TABLE IF NOT EXISTS PEOPLETABLE(PID INTEGER PRIMARY KEY, FIRSTNAME VARCHAR(25), LASTNAME VARCHAR(25) )", String.Format("INSERT INTO PEOPLETABLE (FIRSTNAME, LASTNAME) VALUES ('{0}', '{1}')", "WALLY", "MCCLURE") }; foreach(strings insql) { cmd.CommandText = s; cmd.ExecuteNonQuery(); } SqliteCommand sqlCm = newSqliteCommand(conn); string sSql = "select * from PEOPLETABLE"; sqlCm.CommandText = sSql; sqlCm.CommandType = CommandType.Text; SqliteDataAdapter sda = newSqliteDataAdapter(sqlCm); DataSet ds = newDataSet(); sda.Fill(ds, "PEOPLETABLE"); lblOutput.Text = String.Format("Records returned: {0}", ds.Tables["PEOPLETABLE"].Rows.Count); if ( conn.State != ConnectionState.Closed ) { conn.Close(); } sqlCm.Dispose(); sda.Dispose(); conn.Dispose();
First, we need to get the directory to the folder that we'll use to store our database file. To do so, combine the directory name and filename to get the full path to the database. We'll use that full path as the parameter to the database-connection string.
Next, we'll check whether the file exists. If not, we'll create it using a static method on the SqliteConnection object. Finally, we'll pass some commands into our connection to set up our database. We could also send the database with our application, but since the application will need to send update commands to the database, I have shown how to set up the database.
Alhough this example only pulls the data from the table, you can use parameters to customize your statements. You can also do all the rest of your CRUD operations. A word of warning: Don't pull down 500KB records to your local database. Although you can do so, your application probably won't perform well because of this, and your users will probably not like you.
A few final thoughts on this. Your database is joined at the hip with your application. If you delete your application, iOS will delete your database as well. Updates don't seem to delete your database, but a delete and a reinstall of an application will. If you are looking for LINQ and Entity Framework, you won't find it. There are some third-party object-relational mapping (ORM) tools that work, but your mileage may vary.
SQL Server
.NET developers are very familiar with using SQL Server as their back-end database. There's nothing special about the code to connect to a SQL Server database in a MonoTouch iOS app. The code to do so is exactly the same as you would use in a .NET server-side or client-side application. However, there are several things that you as a developer need to understand before you open up your SQL Server database and allow a user to directly access it over the Internet.
- You need to ask yourself, and your IT security group, do you really want to open up a SQL Server database to the public Internet? Would a better option be to place some web services in front of the database server, call the web services, and allow the web services to handle the communication with the database? At the very least, this eliminates your SQL Server from being directly exposed on the Internet.
- When making a connection to your SQL Server database, you may need to include the internationalization support for the iPhone. This is necessary to successfully make a connection to the database. This can be done in the Advanced tab of the iPhone Build options of your project, as shown in Figure 18.
- Support for SQL Server isn't included by default with an iPhone project. You'll need to add a reference to Mono.Data.Tds. Mono.Data.Tds is the namespace that provides support for connecting to a SQL Server database.
- You'll need to add a using statement for Mono.Data.Tds.
- Support for connecting to SQL Server via Mono.Data.Tds is listed as experimental within MonoTouch. Thus, unforeseen issues may occur when using Mono.Data.Tds and SQL Server.
- I have preached the value of being async, yet in this example I've shown that you can connect through a sync command. I think that this is OK if your application is connecting to a SQL Server that is only on a local network. This fits with my feeling that this is fine to do with an internal application, such as an iPad application that is used for inventory within a warehouse. The chances of a hiccup are fairly low, though they do happen. If you would like, you can use the ADO.NET async commands.
- MonoTouch's implementation of ADO.NET does not include support for LINQ or Entity Framework.
The code sample in Figure 19 is performing a SQL Server connection. Note how it looks exactly like what we see in .NET.
string strCn = "Data Source=xxxxx;user id=yyyyy;password=zzzzzzz;Initial Catalog=aaaaaaa"; string strSql = "select count(*) from Session"; SqlConnection sqlcn = newSqlConnection(strCn); SqlCommand sqlCm = newSqlCommand(strSql, sqlcn); SqlDataReader sdr; sqlcn.Open(); sdr = sqlCm.ExecuteReader(); if ( sdr.HasRows ) { sdr.Read(); count = Convert.ToInt32(sdr[0]); } if ( sqlcn.State != ConnectionState.Closed ) { sqlcn.Close(); } sqlcn.Dispose(); sqlCm.Dispose(); lblOutput.Text = String.Format("Records Returned: {0}", count);
Get the Data!
I hope that you have enjoyed this article on the UITableView and data in MonoTouch. In it, we've looked at various options for using the UITableView and images and the challenges that lie therein. We also looked at various options to go get data to fill our UITableView. These samples should work with iOS 5, MonoDevelop 2.8, and MonoTouch 5.x. Now go and get that data!
Wallace B. "Wally" McClure ([email protected]) is an ASPInsider, member of the national INETA Speakers Bureau, author of seven programming books, and a partner in Scalable Development. He blogs at and co-hosts the ASP.NET Podcast (). Find Wally on twitter as @wbm. | https://www.itprotoday.com/microsoft-visual-studio/monotouch-tutorial-display-tabular-data-iphone-and-ipad-apps | CC-MAIN-2018-39 | refinedweb | 5,378 | 58.99 |
18 July 2012 11:09 [Source: ICIS news]
Correction: In the ICIS news story headlined " ?xml:namespace>
“The reformer was shut today,” the source said.
JX Nippon Oil is shutting the Mizushima B refinery in Okayama prefecture for safety checks, the Japanese producer said last week.
“PX production in [the] Chita plant will be affected due to insufficient feedstock mixed xylene (MX) supply and the overall [PX] output will be reduced by 15-20%,” the source said, adding that contractual volumes will not be disrupted.
The benzene and mixed xylene units at the Mizushima B refinery produces 110,000 tonnes/year of benzene and 300,000 tonnes/year of isomer grade xylene.The company produces 250,000 tonnes/year of mixed xylene and 400,000 tonnes/year of PX at the Chita-based | http://www.icis.com/Articles/2012/07/18/9579058/corrected-japans-jx-nippon-px-supply-hit-by-mizushima-refinery-shutdown.html | CC-MAIN-2014-35 | refinedweb | 133 | 59.64 |
Whether they’re called “closures” or “lambdas” or “Procs”
or “anonymous functions”, many popular programming languages
have a hidden gotcha when it comes to combining them with loops,
and nearly everyone will fall victim to it at one time or another.
The pitfall is this: because most languages reuse the same variable scope
between individual loop iterations, closures created in different
iterations will capture the same loop variable.
Usually that is not what you want or expect. Most often, all the
closures you created end up seeing the same (final) value for the
loop variable.
Each of the following examples loops from 1 to 3, creating an anonymous
function to return each value. In each case, we might expect
the first of these to return 1, but in practice we see something else
entirely…
k = []
for x in 1..3
k.push(lambda { x })
end
Contrary to naïve expectation, k[0].call() returns 3 instead of 1.
k[0].call()
(This can happen just as easily with blocks passed to methods like Thread.new as it can lambda.)
Thread.new
lambda
k = []
for x in xrange(1, 4):
k.append(lambda: x)
Here, too, k[0]() returns 3.
k[0]()
(Python ranges don’t include the final value.)
my @k = ();
for (my $x = 1; $x < 4; $x++) {
push(@k, sub { return $x; })
}
In this case, $k[0]->() returns 4, the value of $x after the last iteration.
$k[0]->()
$x
k = [];
for (var x = 1; x < 4; x++) {
k.push(function () { return x; });
}
Once again, with a C-style for loop, k[0]() returns 4.
More “functional” approaches won’t necessarily save you either:
k = (1..3).map { |x| lambda { x } }
k[0].call() is still 3.
Edit: This depends. It works fine in Ruby 1.9. In Ruby 1.8,
you will get either 1 or 3 depending on whether x has already been
introduced lexically earlier in the method’s scope (e.g. due to a
preceding assignment or for x ... in). I would not recommend
writing code which is fragile in this way.
for x ... in
k = [lambda: x for x in xrange(1, 4)]
k[0]() is still 3.
my @k = map { sub { return $_; } } (1..4);
Now $k[0]->() is the undefined value (undef()).
undef()
While the problem with capturing re-used scopes is widespread, it is worth
noting that not ALL languages with explicit looping suffer from it:
(define k '())
(do ((x 1 (+ x 1)))
((= x 4) '())
(set! k (cons (lambda () x) k)))
(set! k (reverse k))
((car k)) evaluates to 1, as we expected.
((car k))
(This is for demonstration purposes only. Rest assured that I never normally write Scheme this way.)
Also, in some languages, only some loop constructs reuse scopes:
my @k = ();
for my $x (1..4) {
push(@k, sub { return $x; });
}
Believe it or not, this time $k[0]->() is 1!
So all that being said, what’s the best way to avoid the problem in general?
The best and most general solution is to use a named helper function to
construct your closure in a fresh lexical environment.
def make_value_func(value)
lambda { value }
end
k = (1..3).map { |x| make_value_func(x) }
Now k[0].call() returns 1.
def make_value_func(value):
return lambda: value
k = [make_value_func(x) for x in range(1, 4)]
k[0]() finally returns 1.
sub make_value_func {
my ($value) = @_;
return sub { return $value; };
}
@k = map { make_value_func($_); } (1..4);
$k[0]->() returns 1 here as well.
function make_value_func(value) {
return function () { return value; };
}
var k = [];
for (var x = 1; x < 4; x++) {
k.push(make_value_func(x));
}
And finally, here k[0]() returns 1 in JavaScript too.
The approach has at least two extra advantages beyond dealing with the
problem of creating closures over changing loop variables:. | https://web.archive.org/web/20161105080832/http:/moonbase.rydia.net/mental/blog/programming/the-biggest-mistake-everyone-makes-with-closures.html | CC-MAIN-2017-34 | refinedweb | 630 | 67.35 |
- 05 Oct, 2017 1 commit
- blackst0ne authored
- 16 Aug, 2017 1 commit
- Zeger-Jan van de Weg authored
Main feature was the deprication of the Hashie stuff, so the access by calling keys as method is gone now.
- 17 Feb, 2017 1 commit
- 16 Feb, 2017 1 commit
- 13 Feb, 2017 1 commit
- Oswaldo Ferreira authored
- 16 Dec, 2016 1 commit
- 08 Nov, 2016 2 commits
- Luke Bennett authored
Added spec
- 12 Oct, 2016 1 commit
- Thomas Balthazar authored
The /licenses, /gitignores and /gitlab_ci_ymls endpoints are now also available under a new /templates namespace. Old endpoints will be deprecated when GitLab 9.0.0 is released.
- 16 Aug, 2016 1 commit
- 20 Jun, 2016 4 commits
This commit builds on the groundwork in ee008e300b1ec0abcc90e6a30816ec0754cea0dd, which refactored the backend so the same code could be used for new dropdowns. In this commit its used for templates for the `.gitlab-ci.yml` files.
- ZJ van de Weg authored | https://gitlab.com/gitlab-org/gitlab-foss/-/commits/f72598b659871a3d4e8ef1905918067522ba2a29/lib/api/templates.rb | CC-MAIN-2020-16 | refinedweb | 156 | 66.47 |
Improve this doc
PayPal plugin for Cordova/Ionic Applications
Repo:
$ ionic cordova plugin add com.paypal.cordova.mobilesdk
$ npm install --save @ionic-native/paypal
import { PayPal, PayPalPayment, PayPalConfiguration } from '@ionic-native/paypal';
constructor(private payPal: PayPal) { }
..."
// }
// }
}, () => {
// Error or render dialog closed without being successful
});
}, () => {
// Error in configuration
});
}, () => {
// Error in initialization, maybe PayPal isn't supported or something else
});
version()
Retrieve the version of the PayPal iOS SDK library. Useful when contacting support.
Returns: Promise<string>
Promise<string>
init(clientIdsForEnvironments:)
You must preconnect to PayPal to prepare the device for processing payments.
This improves the user experience, by making the presentation of the
UI faster. The preconnect is valid for a limited time, so
the recommended time to preconnect is on page load.
PayPalEnvironment
set of client ids for environments
Returns: Promise<any>
Promise<any>
prepareToRender(environment:, configuration:)
You must preconnect to PayPal to prepare the device for processing payments.
This improves the user experience, by making the presentation of the UI faster.
The preconnect is valid for a limited time, so the recommended time to preconnect is on page load.
String
available options are "PayPalEnvironmentNoNetwork", "PayPalEnvironmentProduction" and "PayPalEnvironmentSandbox"
PayPalConfiguration
PayPalConfiguration object, for Future Payments merchantName, merchantPrivacyPolicyURL and merchantUserAgreementURL must be set be set
renderSinglePaymentUI(payment)
Start PayPal UI to collect payment from the user.
See
for more documentation of the params.
PayPalPayment
PayPalPayment object
clientMetadataID()
Once a user has consented to future payments, when the user subsequently initiates a PayPal payment
from their device to be completed by your server, PayPal uses a Correlation ID to verify that the
payment is originating from a valid, user-consented device+application.
This helps reduce fraud and decrease declines.
This method MUST be called prior to initiating a pre-consented payment (a “future payment”) from a mobile device.
Pass the result to your server, to include in the payment request sent to PayPal.
Do not otherwise cache or store this value.
renderFuturePaymentUI()
Please Read Docs on Future Payments at
renderProfileSharingUI(scopes)
Please Read Docs on Profile Sharing at
Array<string>
scopes Set of requested scope-values. Accepted scopes are: openid, profile, address, email, phone, futurepayments and paypalattributes
See for more details
amount()
The amount of the payment.
currency()
The ISO 4217 currency for the payment.
shortDescription()
A short description of the payment.
intent()
“Sale” for an immediate payment.
bnCode()
Optional Build Notation code (“BN code”), obtained from [email protected],
for your tracking purposes.
invoiceNumber()
Optional invoice number, for your tracking purposes. (up to 256 characters)
custom()
Optional text, for your tracking purposes. (up to 256 characters)
softDescriptor()
Optional text which will appear on the customer’s credit card statement. (up to 22 characters)
items()
Optional array of PayPalItem objects.
payeeEmail()
Optional payee email, if your app is paying a third-party merchant.
The payee’s email. It must be a valid PayPal email address.
shippingAddress()
Optional customer shipping address, if your app wishes to provide this to the SDK.
details()
Optional PayPalPaymentDetails object
name()
Name of the item. 127 characters max
quantity()
Number of units. 10 characters max.
price()
Unit price for this item 10 characters max.
ISO standard currency code.
sku()
The stock keeping unit for this item. 50 characters max (optional)
subtotal()
Sub-total (amount) of items being paid for. 10 characters max with support for 2 decimal places.
shipping()
Amount charged for shipping. 10 characters max with support for 2 decimal places.
tax()
Amount charged for tax. 10 characters max with support for 2 decimal places.
recipientName()
Name of the recipient at this address. 50 characters max.
line1()
Line 1 of the address (e.g., Number, street, etc). 100 characters max.
line2()
Line 2 of the address (e.g., Suite, apt #, etc). 100 characters max. Optional.
city()
City name. 50 characters max.
state()
2-letter code for US states, and the equivalent for other countries. 100 characters max. Required in certain countries.
ZIP code or equivalent is usually required for countries that have them. 20 characters max. Required in certain countries.
countryCode()
2-letter country code. 2 characters max.
string
Will be overridden by email used in most recent PayPal login.
Will be overridden by phone country code used in most recent PayPal login
Will be overridden by phone number used in most recent PayPal login.
Your company name, as it should be displayed to the user when requesting consent via a PayPalFuturePaymentViewController.
URL of your company's privacy policy, which will be offered to the user when requesting consent via a PayPalFuturePaymentViewController.
URL of your company's user agreement, which will be offered to the user when requesting consent via a PayPalFuturePaymentViewController.
boolean
If set to NO, the SDK will only support paying with PayPal, not with credit cards.
This applies only to single payments (via PayPalPaymentViewController).
Future payments (via PayPalFuturePaymentViewController) always use PayPal.
Defaults to true
number
For single payments, options for the shipping address.
If set to YES, then if the user pays via their PayPal account,
the SDK will remember the user's PayPal username or phone number;
if the user pays via their credit card, then the SDK will remember
the PayPal Vault token representing the user's credit card.
If set to NO, then any previously-remembered username, phone number, or
credit card token will be erased, and subsequent payment information will
not be remembered.
Defaults to YES.
If not set, or if set to nil, defaults to the device's current language setting.
Can be specified as a language code ("en", "fr", "zh-Hans", etc.) or as a locale ("en_AU", "fr_FR", "zh-Hant_HK", etc.).
If the library does not contain localized strings for a specified locale, then will fall back to the language. E.g., "es_CO" -> "es".
If the library does not contain localized strings for a specified language, then will fall back to American English.
If you specify only a language code, and that code matches the device's currently preferred language,
then the library will attempt to use the device's current region as well.
E.g., specifying "en" on a device set to "English" and "United Kingdom" will result in "en_GB".
Normally, the SDK blurs the screen when the app is backgrounded,
to obscure credit card or PayPal account details in the iOS-saved screenshot.
If your app already does its own blurring upon backgrounding, you might choose to disable this.
Defaults to NO.
If you will present the SDK's view controller within a popover, then set this property to YES.
Defaults to NO. (iOS only)
Sandbox credentials can be difficult to type on a mobile device. Setting this flag to YES will
cause the sandboxUserPassword and sandboxUserPin to always be pre-populated into login fields.
Password to use for sandbox if 'forceDefaultsInSandbox' is set.
PIN to use for sandbox if 'forceDefaultsInSandbox' is set. | https://ionicframework.com/docs/native/paypal/ | CC-MAIN-2018-26 | refinedweb | 1,125 | 50.73 |
Code snippets are available in pretty much all code editors these days. They can be a great times saver allowing you to insert commonly used blocks of code for any programming language quickly and easily.
VS Code is no exception and we’ll see exactly how you can create your own custom code snippets to greatly improve your workflow. Not only that but you’ll also learn what polymorphic code snippets are and how useful they can be compared to standard snippets. It’s worth waiting for I promise!
Some implementations of code snippets in other editors can seem a little cryptic to use especially on first exposure. However, in VS Code they’re relatively simple to get the hang of. I was pleasantly surprised to find even dynamic code snippets are pretty straightforward to set up.
So, let’s dive in!
Creating a basic snippet
The process for creating code snippets in VS Code is the same for all programming languages. All custom snippets are stored in JSON files (one for each programming language).
You can access them in VS Code via:
File > Preferences > User Snippets (Windows)
Code > Preferences > User Snippets (macOS)
This displays a drop down list of available languages you can define snippets for. If you’ve already added custom snippets for some languages they appear first in the list for convenience.
Select PHP from the list and a php.json file opens in a new tab inside the editor window. This is where you’ll add your custom snippets for the PHP language.
Each programming language JSON file has a default example in the comments to demonstrate code snippet usage. The example is the same one for all programming languages so isn’t that useful except as a starting point for your first snippet if want to save some typing.
To create a new snippet add a named JSON object to php.json with the following properties:
prefix– Text that triggers the code snippet
description– Displayed in the list of available snippets as you type in the trigger prefix
body– Code snippet content
Here’s a simple example to output the body of a PHP function:
{ "Basic PHP Function": { "prefix": "func", "body": [ "function test() {", "techo "Hello World!";", "}" ], "description": "Outputs a basic PHP function." } }
The snippet name
"Basic PHP Function" is just for your benefit and doesn’t appear outside of the JSON file but the prefix and description fields will be visible so it’s a good idea to pick meaningful values.
If you only want the snippet to output a single line of then the body can just be a simple string. But most often you’ll want it to span multiple lines in which case define the body as an array of strings as in the example above.
Also if you want the resulting code to be nicely indented then add tab characters
t to the beginning of each line as required. Note how we also escaped the double quote characters so we could use them inside the code snippet.
So, now we’ve defined our code snippet how do we use it?
Firstly, no editor restart is necessary. We can start using the new snippet straight away. Open up an existing PHP file or create a new one and start typing out the first couple of letter of
func anywhere after
<?php.
Every code snippet matched will be displayed in a pop-up window. But other matches are also displayed such as built-in matches from the PHP language. You can easily tell which ones are code snippets as these are prefixed by a black box with a white border (bottom border is dotted).
To expand out the code snippet select it from the list and hit the Enter or Tab key.
Did you notice when inserting the snippet that you only see the ones available for the programming language you’re currently editing? This makes searching for code snippets very convenient so you don’t have to wade through lots of irrelevant is also the key to implementing polymorphic code snippets as we’ll see later on.
Going further with code snippets
Outputting static blocks of code is certainly very useful and can save you a ton of typing but we can do even more with VS Code snippets by making them interactive.
Tab stops
Building on our previous example of the PHP function snippet we can use tab stops to navigate to predefined locations in the code snippet and add our own values.
To define a tab stop just insert a dollar character followed by a number anywhere inside the body of the code snippet.
If we go back to our PHP function example from earlier then we can add tab stops for parameters and the string value.
{ "Basic PHP Function": { "prefix": "func", "body": [ "function test( $$1 ) {", "techo "$2";", "}", "", "$3" ], "description": "Outputs a basic PHP function." } }
Now when the snippet is expanded out the cursor jumps to the first tabs top
$1 so you can add a parameter variable. Hitting the tab key again jumps the cursor inside the string value to tab stop
$2.
Tab stop order matters here so if we reversed the numbering of the tab stops then the cursor would jump to the string first and then the function parameter.
Note that
$$1 for the parameter is not a typo. We’re just prefixing the tab stop with a
$ character so it doesn’t have to be entered every time. You can leave this out of course if you wish.
We also added a third tab stop to jump to outside of the function for convenience so we can easily carry on adding new PHP statements outside of the function body.
Placeholders
Rather than simply jump the cursor to predefined points we can also add numbered placeholder text which gets inserted into the snippet by default.
A numbered tab stop placeholder is defined as:
${1:item}
You can read this as the first tab stop with the default text
item. As you cycle through each tab stop you can optionally update the inserted text before tabbing to the next location or leave it at the default value.
{ "Basic PHP Function": { "prefix": "func", "body": [ "function test($${1:name}, $${2:age}, $${3:gender}) {", "techo "Output data: {$${4:name}} {$${5:age}} {$${6:gender}}";", "}", "", "$0" ], "description": "Outputs a basic PHP function." } }
If you don’t alter any of the default placeholder text then the function will be outputted as:
function test($name, $age, $gender) { echo "Output data: {$name} {$age} {$gender}"; }
This is fine if you’re happy with the default placeholder text but if you want to change any of the variables then you have to type in the text twice so that they are matched in both places.
If you have more complicated code snippets with the same variables used in several places inside the snippet then this can soon get tedious. We’ll see how to get around this next.
Variable placeholders
Rather than have numbered tab stops you can also have variable placeholder tab stops. This is great for when you have the same variable defined in multiple locations. Every time you update a variable placeholder it updates in all other locations too.
Let’s modify the example from the previous section to use variable placeholders.
{ "Basic PHP Function": { "prefix": "func", "body": [ "function test($${name}, $${age}, $${gender}) {", "techo "Output data: {$${name}} {$${age}} {$${gender}}";", "}", "", "$0" ], "description": "Outputs a basic PHP function." } }
Now when you trigger the snippet if you update any of the placeholders it automatically updates in the other location too which is exactly what we want!
Placeholder choices
If you’re using numbered placeholders then you can optionally provide users with a choice of values that can be inserted too.
The format for this is:
${1|one,two,three|}
The choices are inserted as a comma separated list surrounded by pipe characters.
An example of using placeholder choices is:
{ "Favorite Color": { "prefix": "favcol", "body": [ "echo "My favorite color is ${1|red,green,orange,blue,indigo|}";", "$0" ], "description": "Outputs your favorite color." } }
When you trigger this code snippet a drop-down list of choices is presented. Just select the one you want and then hit tab to go to the next tab stop.
Polymorphic code snippets
Now that we’ve covered how to implement code snippets in VS Code let’s turn our attention to making them work more efficiently.
First though let’s talk about polymorphism. The big idea has to do with reusability. It is commonly found in situations where something occurs in multiple forms but is available via a common interface.
Polymorphism is kind of a big deal in object-oriented programming (OOP) and there are entire books dedicated to the subject. For our purposes though we can take this idea of reusability and apply it when implementing code snippets for different programming languages that are invoked via a common trigger.
Let’s say that you have code snippets defined in several different programming languages that do the same thing. i.e. the syntax is different for each snippet but the purpose of the code snippet is the same.
One such example could be to output a value of a variable for debugging purposes.
We’ll implement this in PHP and JavaScript but you could easily extend this for other languages too such as C++, Python, Java, Objective-C and so on.
PHP
{ "Output PHP value": { "prefix": "pout", "body": [ "echo "<pre>";", "print_r($${value});", "echo "</pre>";", "$0" ], "description": "Outputs a PHP value to the screen." } }
JavaScript
{ "Output JavaScript value": { "prefix": "jout", "body": [ "console.log(${value});", "$0" ], "description": "Outputs a PHP value to the screen." } }
As we continued to add output code snippets for other programming languages we’d have to remember how we named them for each language.
But the trick is to purposely give them all exactly the same trigger.
PHP
{ "Output PHP value": { "prefix": "out", "body": [ "echo "<pre>";", "print_r($${value});", "echo "</pre>";", "$0" ], "description": "Outputs a PHP value to the screen." } }
JavaScript
{ "Output JavaScript value": { "prefix": "out", "body": [ "console.log(${value});", "$0" ], "description": "Outputs a PHP value to the screen." } }
So now we have a single trigger that contextually outputs a code snippet depending on the type of file you triggered the snippet from. Neat eh?
Try it out for yourself. Start typing
out inside a PHP file. As you can see this triggers the code snippet from php.json and likewise if you do the same from a JavaScript file then the javascript.json
out snippet gets used instead!
Here’s another example to output the same HTML tags from multiple languages. Tab stops are defined to allow the HTML tags to be changed if required.
HTML
{ "Output HTML": { "prefix": "tag", "body": [ "<${h2}>Heading</${h2}>", "<${p}>I wandered lonely as a cloud.</${p}>", "$0" ], "description": "Outputs HTML." } }
PHP
{ "Output HTML Tag": { "prefix": "tag", "body": [ "echo "<${h2}>Heading</${h2}>";", "echo "<${p}>I wandered lonely as a cloud.</${p}>";", "$0" ], "description": "Outputs HTML via PHP." } }
JavaScript
{ "Output HTML Tag": { "prefix": "tag", "body": [ "var heading = \"<${h2}>Heading</${h2}>\";", "var body = \"<${p}>I wandered lonely as a cloud.</${p}>\";", "document.querySelector(\"#${id}\").innerHTML = heading + body;", "$0" ], "description": "Outputs HTML via JavaScript." } }
JSX
"Output HTML Tag": { "prefix": "tag", "body": [ "class ${Component} extends React.Component {", "\trender() {", "\t\treturn (", "\t\t\t<Fragment>", "\t\t\t\t<${h1}>Heading</${h1}>", "\t\t\t\t<${p}>I wandered lonely as a cloud.</${p}>" "\t\t\t</Fragment>", "\t\t)", "\t}", "}", "$0" ], "description": "Outputs HTML via JSX." } }
As before, just start typing out the trigger text (in this case
tag) and you’ll see the relevant code snippet for the type of file you’re currently editing.
Congratulations, you’ve now graduated to the world of polymorphic code snippets!
This approach to developing code snippets is very efficient and can save you from having to remember lot’s of different snippet triggers. Now you only need to remember just a single trigger for snippets that perform a common task.
What’s more, you can create as many of these polymorphic code snippets as you like!. | https://blog.logrocket.com/custom-polymorphic-code-snippets-in-vs-code-e76d8cad656b/ | CC-MAIN-2022-40 | refinedweb | 2,013 | 61.67 |
Red Hat Bugzilla – Bug 207470
Need ability to handle duplicate VG names for Xen or kvm
Last modified: 2015-02-18 06:56:26 EST
Duplicate VG names strike again. Except with Xen, it's worse.
The trouble is that we can have nested PVs inside a LV. In fact, if you install
a Xen guest (or indeed _any_ virtualised image, be it Xen, qemu or whatever)
onto an LV, you're going to end up with a partition inside the LV disk image
which itself contains a PV; and if the user uses default VG naming, then that
nested PV is going to be initialised with the same VG name for every guest (and
the same name as the host's own VG if the user has not overridden this.)
Now, these nested VGs are active parts of a virtual image: there are boot
loaders on the disk image's /boot which refer to the VG by name. So renaming the
VG is not a viable procedure for maintenance purposes if an admin wants to
access the nested VG without breaking the image as a whole.
And it gets worse if an admin tries to snapshot the disk image (a reasonable
thing to do if you want to backup the xen image via LVM snapshots.) In that
case, we end up with two separate LVs, each of which contains a nested PV with
the *SAME* UUID, but not necessarily the same contents --- these are not simply
multiple paths to the same data.
Now, it's probably reasonable to simply error out if we find such duplicate
UUIDs, as long as we do so cleanly --- a user trying to activate an origin and
its snapshot at the same time via kpartx/vgchange is asking for trouble. But
preferable would be a way to activate a vggroup based on one of its PVs.
Even without the handling of duplicated UUIDs, a way to activate a VG and access
LVs from amongst multiple VGs with the same name is neede in order to give
administrative access to data hidden inside a domU.
Version-Release number of selected component (if applicable):
All
I just got burned by this, and I'm not a happy camper. :(
I'd be willing to settle for being able to temporarily rename the nested VG, but
you can't even do that:
$ lvs
LV VG Attr LSize Origin Snap% Move Log Copy%
root os -wi-ao 4.00G
usr os -wi-ao 8.00G
...
Ok, fine, I'll just temporarily rename the nested volume group to something else:
$ vgrename 326MmU-VwzI-jMgG-stSe-4UFY-OcrX-1EcVO3 xos "os" still has active LVs
Okay, logical volumes (which lvs won't show me!) in the nested VG are active.
I'll just deactivate the nested VG by referring to it by UUID:
$ vgchange -a n 326MmU-VwzI-jMgG-stSe-4UFY-OcrX-1EcVO3 "326MmU-VwzI-jMgG-stSe-4UFY-OcrX-1EcVO3" not found
Surprise, surprise: vgrename will accept a UUID for a VG name (even though this
behavior isn't documented anywhere I can find), but vgchange won't.
I'm sympathetic to the problem of handling duplicate VGs (because damn, is that
a thorny problem), but there's no excuse for inconsistencies in the applications
which will accept UUIDs as sources or targets. Pick a scheme that makes sense
(e.g.: any name/path argument of the form "uuid:blah-blah-blah-blah" is looked
up by UUID), implement that scheme consistently, and document very much relevant AFAIK
Changing version to '9' as part of upcoming Fedora 9 GA.
More information and reason for this action is here:
Steven,
I think I understand what you are saying but let me try to summarize.
You would like the ability for an administrator to access LVs on an embedded
domU image file, so you can get at the data on there without having to boot the
domU (which may not even be possible in some cases). I in fact ran into this
recently as well when an install of rawhide on a domU failed to install grub
properly. I was able to work around this by using a procedure similar to this:
and by renaming the volume group. As I recall I hit the bug of the device file
not getting properly updated on the rename, and I will check the latest code for
the bug in comment #1. Did I summarize the original problem correctly?
Will think about ways to improve the situation so maintenance of xen domU images
are simpler with LVM tools. Not sure what the answer is - might go beyond LVM
tool enhancement.....)
Thanks for the info and great suggestions!
CC-ing Peter Jones and will ping him separately to see what he thinks about
detecting virtualized setups and modifying the default vol group name for guests.
>..
One potential option that is relatively easy and gets us a little further is this. Modify vgrename to have a "--force" option on it and allow renames into duplicate VG names.
I don't entirely like this idea, but it's relatively simple to implement in the code from what I can tell. I'm not sure of all potential side-effects yet (e.g. cluster?) The change is not ideal, but allows us to do a sequence like the following.
1. Use vgrename to map the duplicate xen domU VG (e.g. VolGroup00) to a temporary name, using the 'uuid' of the VG
2. Activate the VG, mount LVs, do maintenance, etc
3. Unmount the LVs, deactivate the VG
4. Rename the VG back to the original name.
Currently we allow vgrename away from duplicate VG names, but not back. The current situation is not good because you can get yourself into a situation where you can't get back to a bootable system very easily (step #4 above will fail).
I am looking at alternatives of using the UUID to disambiguate the LV / dm target names but right now that is looking more involved.
From what I can tell, below is what we have today in the virtualization guide for dealing with guest images:
That virtualisation-guide recipe is precisely what won't work with default volume group naming conventions.
As for renaming volume groups, yes, that may provide one solution. But as noted it does risk leaving an image unbootable; and as a general principle of system diagnosis/recovery, users should not have to significantly reconfigure their disks just in order to be able to perform maintenance on them.
Agreed - I've got a setup myself and have been playing with maintenance on my own guest images so I feel the pain!
I'm looking into better solutions such as the code detecting duplicate names and perhaps add something (part of the UUID?) to remove the ambiguity. Making such an LV usable (for maintenance) would require modification of the device mapper namespace but not the on-disk metadata, so this needs some thought and maybe a special option.
This bug appears to have been reported against 'rawhide' during the Fedora 10 development cycle.
Changing version to '10'.
More information and reason for this action is here:
"One potential option that is relatively easy and gets us a little further is
this. Modify vgrename to have a "--force" option on it and allow renames into
duplicate VG names."
Just wanted to second how useful that would be. As others have mentioned, getting proper "duplicate VG name" handling seems, well, more of a long term solution.
Just being able to force rename a vg would make it possible to save otherwise unbootable images without having to copy the image over to a box with a different vg name...
For the "my xen image is fubar'ed and I need to mess with the data to make it boot" situation I just realized an obvious-ish work-around: Run a new box with the rescue image.
Something like
virt-install -n box1b -r 1024 -f /xen/disks/brokendisk -p -l -d --vnc --vcpus=1 -x rescue
...
Anyway, it just saved me a re-install. (Of course the one box that isn't managed completely by puppet etc was the one to go kaboom).
- ask
This bug appears to have been reported against 'rawhide' during the Fedora 11 development cycle.
Changing version to '11'.
More information and reason for this action is here:
This bug appears to have been reported against 'rawhide' during the Fedora 12 development cycle.
Changing version to '12'.
More information and reason for this action is here:
"One potential option that is relatively easy and gets us a little further is
this. Modify vgrename to have a "--force" option on it and allow renames into
duplicate VG names."
I'll second that!
My story:
After doing a vgrename, I was able to mount it and go in and fix some files, but on doing vgrename to set it back to VolGroup00, I ran into this problem... Arrrggggg.
Here's my bad remedy all from Dom0, it prevented me from rebuilding from scratch. This worked for me, so do at your own risk. I'm also breezing over stuff quick as if you got yourself in this mess, you should be pretty conferable as a sysadmin.
My mess up started with
I had vgrename VolGroup00 to --> VolGroup00_WET
To make it work again in Xen We have to change the grub.conf and the initrd.img (my case initrd-2.6.18-164.10.1.el5xen.img) along with the /etc/fstab as it will try to fsck VolGroup00 and fail on boot.
1st. We need to mount the boot partition with the initrd*.img and the grub.conf files.
I used:
mount -o loop,offset=32256 /dev/VolGroup_Wet/LogVol_Wet /mnt/LogVol_WET (love offset!)
/mnt/LogVol_WET being my mnt space on Dom0.
In here we will find /grub/grub.conf. Vim in and change the VolGroup00 to VolGroup00_WET(what my NEW VolGroup got vgrename too) Next grab the initrd*.img. My current one was called initrd-2.6.18-164.10.1.el5xen.img. I found out how to change this here [] Thanks!!!! heres a quick copy paste from there.
---------------------------------------------------------------------------
the problem is that the intrd was indeed in the gzip format and further in the cpio format, but the cpio format is not the default "bin" format used by cpio but a "newc" format, so the extraction part was perfect, but when you put it back you change the cpio format and hence the kernel hangs
(collection of files and directories) -> cpio archives -> gzip -> final initrd.img
that means you can give the following commands -
mkdir ~/tmp
cd ~/tmp
cp /boot/initrd.img ./initrd.gz
gunzip initrd.gz
mkdir tmp2
cd tmp2
cpio -id < ../initrd.img
now you should have a lot of files in ~/tmp/tmp2 directories, including a lot of subdirectories like sbin,lib
now do the required changes to the files
then pack the files back into the archive using the following command
cd ~/tmp/tmp2
find . | cpio --create -- ~/tmp/newinitrd
cd ~/tmp
gzip newinitrd
now you would have a newinitrd.gz
rename this now -
mv newinitrd.gz as newinitrd.img
this is the new boot image now !!
----------------------------------------------------------------------------
Move this new image back to your mounted partition. I renamed the old one, just incase.
Whew thats done! unmount.
umount /mnt/LogVol_WET
2nd. Lets Mount root / . Unfortunately offset won't work on this.
I did this like this from the Dom0.
/sbin/kpartx -a /dev/VolGroup_Wet/LogVol_Wet
/usr/sbin/pvscan
/usr/sbin/lvscan
/usr/sbin/vgchange -a y
## Mount it on /mnt/LogVol_WET
mount /dev/VolGroup00_WET/LogVol00 /mnt/LogVol_WET
now go in vim /mnt/LogVol_WET/etc/fstab and change VolGroup00 to VolGroup00_WET(My new volgrp)
DON'T edit your Dom0 fstab.. I did that but luckily noticed and fixed my mistake!!!!
phew... I think that's everything. Now unmount and clean up your kpartx stuff
umount /mnt/LogVol_WET
/usr/sbin/vgchange -a n
/sbin/kpartx -d /dev/VolGroup_Wet/LogVol_Wet
Now I can
/usr/sbin/xm create WET
and I'm golden!
I know this seems like a lot and I learned a lot. I mostly put this here knowing I will loose my notes some time in the future. You have to love Google. I would have been hosed!
Oh yea as I state before all this could have been avoided if vgrename had a "--force" option!!!!!!
Ooops forgot a step.
After you open the initrd img you need to vim in and edit the init file
in here at the bottom you will find references to the /dev/VolGroup00/LogVol00 change to /dev/VolGroup00_WET/LogVol00 (of as needed)
Sorry Kind of important. I didn't know how to edit my comment above.
(In reply to comment #19)
> After doing a vgrename, I was able to mount it and go in and fix some files,
> but on doing vgrename to set it back to VolGroup00, I ran into this problem...
> Arrrggggg.
Sounds complicated though. For editing stuff in your
Xen disk images, use
This bug appears to have been reported against 'rawhide' during the Fedora 14 development cycle.
Changing version to '14'.
More information and reason for this action is here:
Created attachment 439221 [details]
Tool to edit PV fields, including VG name
This is a quick hack to work around the duplicate VG name issue. It is not intended for general use, only as an emergency tool to fix issues as described in this bug report.
If the VG is contained in a single PV, it may be renamed with eg. "pvtool /dev/nbd0p1 vg_name=vg00_tmp".
That tool jst handles lvm1 format - which is deprected and all new installations use new text metadata format.
Anyway, you can rename VG using its UUID already- see "man vgrename"
vgrename Zvlifi-Ep3t-e0Ng-U42h-o0ye-KHu1-nl7Ns4 VolGroup00_tmp
(this should work for lvm1 format too.)
LVM device filtering could be used here to make use of some problematic devices, using the "--config" option for LVM2 commands, like:
--config 'devices { filter = [ "r|/dev/sda|...|" ] }'
..where you can give the list of devices which should be filtered out (devices used as PVs where a duplicate VG is). Then you can manage the problematic VG/LV. However, you can't rely on activation to work since you could already have the duplicate VG activated of course (we would need to do something like Dave suggested already in comment #12, however, I think this is a no-go for now).
As for supporting the --force option to allow renaming to a duplicate name as mentioned in comment #9 - we could probably add support for this... But it's really not very nice.
As for using the libguestfs to access the contents of the image (as mentioned in comment #21), this seems to be the most elegant solution I think (together with defining a filter directly in /etc/lvm/lvm.conf to exclude all LVM devices used as guest images so any LVM setup inside won't interfere with the host LVM setup). However, we need to take into account that libguestfs runs a new VM instance with it's own minimalistic kernel/rootfs and the image we need to access added.
I think we could go with the filters set + libguestfs for now. Would that be satisfactory?
(Just a note, that there's also an RFE filed for using UUID on input of other LVM2 commands: bug #449832)
(In reply to comment #25)
> I think we could go with the filters set + libguestfs for now.
+ guestfish
In case it's not clear from comment 26, libguestfs supports
adjusting the LVM filters:
We use it in virt-df already.
I found another way how to mount images which (may) have duplicate vg names:
1. setup loopback device with the name
2. setup loopback device for data you are going to write to it (must have)
3. use device mapper to map loopback device to dm target, read-only (will allow us to use snapshots)
4. create writeable snapshot of the device from 3.
5. use kpartx to map partitions to new dm devices (kpartx -a <device from 4.>)
6. use vgcloneimport -n my_not_colliding_vg_name /dev/mapper/jkdev_snap2 (replace with output from 5)
7. pvscan, lvscan, vgchange ... all that fun you wish to do with the newly created VG and its LVs
The reason for using this is that vgcloneimport destroys the mapping and corrupts the image (no longer bootable without further changes) by changing UUIDs/names. This way the changes go right to the snapshot and keeps the original image intact.
If you wish to write to the original image while you are doing backup, it would be better to create proper snapshot with snapshot-origins as described in (use dm create ... --notable). For shutdown&examine (my case) this is probably the most simple way.
Clean up via dmsetup remove <device> and losetup -d.
# sample script:
imgfile="/var/lib/libvirt/images/node02.img"
snapfile="/var/lib/libvirt/images/snapshot.img"
devname="jkdev"
snap="jkdev_snap"
loop1="/dev/loop1"
loop2="/dev/loop2"
# loop device
# map loop1 -> image file
# map loop2 -> snapshot file
losetup -r $loop1 $imgfile
dd if=/dev/zero of=$snapfile bs=1M seek=100 count=0
losetup $loop2 $snapfile
# map dm -> loop1
dmsetup -r create $devname --table "0 `blockdev --getsize $loop1` linear $loop1 0"
# snapshot mappings
dmsetup create $snap --table "0 `blockdev --getsize /dev/mapper/$devname` snapshot /dev/mapper/$devname $loop2 p. | https://bugzilla.redhat.com/show_bug.cgi?id=207470 | CC-MAIN-2017-34 | refinedweb | 2,906 | 70.33 |
Visual C++ comes within Microsoft Visual Studio 2008. Visual Studio 2008 also contains Visual Basic, Visual C#, and Visual J#. Using Visual Studio.NET, you can mix and match languages within one "solution". We will, however, focus on developing C++ code throughout these labs.
For your first C++ program, you will build a console mode application that displays a greeting message. This (i.e. a console mode application) is the kind of VC++ programs that you will build for all your lab and class exercises/assignments.
Console mode programs are often simpler to build than Windows applications, and this example will take you through the steps of creating, building and executing a program in Visual C++. We will use the built-in code editor in Visual Studio to edit your code; then we will show you how to build and run your C++ programs.
on your window desktop, choose All Programs
from the popup menu, then choose Microsoft Visual Studio 2008,
and Microsoft Visual Studio 2008
Select Visual C++ Development Settings, then click on Start Visual Studio.
The next thing you will see is the Start Page.
After this, click on "Finish". You will notice that it doesn't appear like anything has changed (you still see the "Start Page"). However, look at the "Solution Explorer" on the left-hand side you will see "Solution 'hello' (1 project)".
You want to add C++ source code to this project.
Select Project --> Add New Item... from the main menu, and select C++ File (.cpp) from the "Templates" section on the right-hand side. Type in the file name: "hello.cpp" in the Name: box. Click on "Add". This file will be added to the hello work space that we have just created, and a blank document will be opened for editing.
Type the following program in the source code editor:
// FILE: hello.cpp // PURPOSE: An example of a simple I/O stream #include <iostream> #include <string> using namespace std; int main() { char name[50]; cout << "Please enter your name" << endl; cin >> name; cout << "Hello, " << name << endl; return 0; }
Save hello.cpp after you have finished editing it.
In order to compile any code in Visual C++, you have to create a project. A project holds three major types of information:
1) It remembers all of the source code files that combine together to create one executable. In this simple example, the file hello.cpp will be the only source file, but in larger applications you often break the code up into several different files to make it easier to understand (and also to make it possible for several people to work on it simultaneously). The project maintains a list of the different source files and compiles all of them as necessary each time you want to create a new executable.
2) It remembers compiler and linker options particular to this specific application. For example, it remembers which libraries to link into the executable, whether or not you want to use pre-compiled headers, and so on.
3) It remembers what type of project you wish to build: a console application, a windows application, etc.
For now we will create a very simple project file and use it to compile hello.cpp.
Compile and Build:
1. Compile the hello project by selecting Build -->
Compile from the main menu.
It simply compiles the source file and forms the object file (hello.obj) for it. It does not perform a link, so it is useful only for quickly compiling a file to check for errors.
2. Select Build --> Build hello from the
menu bar to link the program.
It compiles all of the source files in the project that have been modified since the last build, and then links them to create an executable.
3. Choose Debug -->
to run the program. A DOS window will
popup.
If errors or warnings are displayed in the Build status window, there is probably an error in the source file. Check your source file again for missing semicolons, quotes, or braces.
Since you created your C++ program on C:\Workarea, your files will be erased when you logout. To prevent this, you can save your files on I drive. The I drive is yours.
To do this,click on My Computer on the desktop, under Network Drives, you will see
an image
with your username on it.
Follow the link Access Novell from Home | http://www.cs.uregina.ca/Links/class-info/110/unix/vc.html | CC-MAIN-2018-22 | refinedweb | 735 | 73.88 |
The first release of the dccil compiler included new features required by the CLR, and more have been added in subsequent updates.
Namespaces play an important role in the .NET Framework. They allow the class hierarchy to be extended by multiple third parties without fear of conflicting symbol names. Windows and COM use a 16-byte GUID to uniquely identify components, and this magic number must be recorded in the system registry. On the .NET platform, the concept of namespaces—plus metadata and the hard-and-fast rules about locating assemblies—makes GUIDs obsolete.
Ironically, the idea of a Delphi unit is similar to the CLR's namespaces. It's not too far a leap, if you think of a unit as a container of symbols, and a namespace as a container of units. In Delphi for .NET, the namespace to which a unit belongs is declared in the unit clause:
unit NamespaceA.NamespaceB.UnitA;
The dots indicate the containment of one namespace within another, and ultimately of the unit within the namespace. The dots separate the declaration into components, and each component—up to but not including the rightmost one—is a namespace. The entire declaration taken as a whole, dots and all, is the unit name. The dots simply serve as separators; no new symbols are introduced by the declaration. In this example, NamespaceA.NamespaceB is the namespace, and NamespaceA.NamespaceB.UnitA is the name of the unit. NamespaceA.NamespaceB.UnitA.pas would be the name of the source file, and the compiler would produce an output file called NamespaceA.NamespaceB.UnitA.dcuil.
The program statement (and eventually the package and library statements) optionally declares the default namespace for the entire project. Otherwise, the project is called a generic project, and the default namespace is that specified by the –ns compiler option. If no default project namespace is specified with compiler options, then behavior reverts to not using namespaces, like in Delphi 7 (and prior releases).
The unit clause does not have to declare membership in any explicit namespace. It might look like a traditional Delphi statement:
unit UnitA;
A unit that does not declare membership in a namespace is called a generic unit. Generic units automatically become members of the project namespace. Note, however, that this does not affect the source filename.
In the project file, you can specify a namespaces clause to list a set of namespaces for the compiler to search when it is trying to resolve references to generic units. The namespaces clause must appear immediately after the program (or package or library) statement and before any other clause or block type. The namespaces are separated by commas, and the list is terminated with a semicolon. For example:
program NamespaceA.MyProgram namespaces Foo.Bar, Foo.Frob, Foo.Nitz;
This example adds the namespaces Foo.Bar, Foo.Frob, and Foo.Nitz to the generic unit search space.
This discussion leads up to showing you how the compiler searches for generic units when you build your program. When you use a unit and fully qualify its name with the full namespace declaration, there is no problem:
uses Foo.Frob.Gizmos;
The compiler knows the name of the dcuil file (or the .pas file) in this case. But suppose you only said the following:
uses Gizmos;
This is called a generic unit reference, and the compiler must have a way to find its dcuil file.
The compiler searches namespaces in the following order:
The current unit namespace (if any)
The default project namespace (if any)
The namespaces listed in the project's namespaces clause (if any)
The namespaces specified by compiler options
For the first item, if the current unit specifies a namespace, then subsequent generic unit references in the current unit's uses clause are looked for first in the current unit's namespace. Consider this example:
unit Foo.Frob.Gizmos; uses doodads;
The first search location for the unit doodads would be in the namespace Foo.Frob. So, the compiler would try to open Foo.Frob.Doodads.dcuil. Failing this, the compiler would move on and prefix the unit name doodads with the default project namespace, and so on down the list.
The same symbol name can appear in different namespaces. When such ambiguity occurs, you must refer to the symbol by its full namespace and unit name. If you have a symbol named Hoozitz in unit Foo.Frob.Gizmos, you can refer to the symbol with either
Hoozitz; // if the name is unambiguous Foo.Frob.Gizmos.Hoozitz;
but not with
Gizmos.Hoozitz; // error! Frob.Gizmos.Hoozitz; // error!
Unit and namespace names can become quite long and unwieldy. You can create an alias for the fully qualified name with the as keyword in the uses clause:
uses Foo.Frob.DepartmentOfRedundancyDepartment.UIToys as ToyUnit;
Unit aliases introduce new identifiers, so their names cannot conflict with any other identifiers in the same unit (aliases are local to their unit). Even if you declare an alias, you can still use the original, longer name to refer to the unit.
The cross-language integration of the CTS and CLR brings up some interesting situations for compiler developers. For example, what if the name of an identifier in an assembly is the same as one of your language keywords? Consider the Delphi language keyword type. Type is also the name of a CLR class. Because type is a language keyword, it cannot be used as the name of an identifier. You can avoid this problem two ways in Delphi for .NET (these techniques were not implemented in Delphi 7 and previous versions).
First, you can use the fully qualified name of the identifier:
var T: System.Type;
The second, shorter way is to use the new ampersand operator (&) to prefix the identifier. The following has the same effect as the previous example:
var T: &Type;
In this statement the ampersand tells the compiler to look for a symbol with the name Type and to not consider it as a keyword. The compiler will look for the Type symbol in the available units, finding it in System (the same mechanism works regardless of the unit defining the symbol).
Two more concepts specified by the Common Language Infrastructure (CLI) have been added to the Delphi for .NET compiler: the class attribute sealed and the method attribute final. Putting the sealed attribute on a class effectively ends the class's ability to be used as a base class. Here is a sample code snippet:
type TDeriv1 = class (TBase) procedure A; override; end sealed;
A class cannot derive from a class that has been sealed. Similarly, a virtual method marked with the final attribute cannot be overridden in any descendant class, as in the following sample code.
type TDeriv1 = class (TBase) procedure A; override; final; end; TDeriv2 = class (TDeriv1) procedure A; override; // error: "cannot override a final method" end;
Borland added the sealed and final keywords to map an existing feature of .NET, but why did Microsoft introduce these attributes? The final and sealed attributes give users of your code important insights into how you intend your classes to be used. Moreover, these attributes give the compiler hints that allow it to generate more efficient Common Intermediate Language (CIL).
Delphi's notion of visibility—public, protected, and private—is a bit different from that of the CLI. In languages like C++ and Java, when you specify a visibility of private or protected on a class member, that class member is only visible to descendants of the class in which it is defined. As you saw in Chapter 2, however, Delphi enforces the idea of private and protected only for classes in different units, because everything is visible within a single unit. To be CTS compliant, the language required new visibility specifiers:
class private A member declared with class private visibility follows the C++ and Java rules. That is, class private members can be accessed only in methods or properties of the declaring class. Procedures and functions declared at the unit level and methods of other classes do not have access.
class protected Similarly, class protected members are visible only within the declaring class, and to descendants of the declaring class. Other classes in the same unit have access only if they inherit from this class.
See the ProtectedPrivate example in the LanguageTest folder of the chapter's source code for a trivial test case.
Delphi has long supported class methods—methods you can apply to a class as a whole and also as a specific instance, even if the methods' code cannot refer to the current object (the Self parameter of a class methods references the current class, not the current object). Delphi for .NET extends this idea by adding the class static specifier, class properties, class static fields, and class constructors:
Class Static Methods Like Delphi 7 class methods, class static members can be called without an object instance, and no Self parameter refers to an object. Unlike in Delphi 7, however, you cannot refer to the class itself. For example, calling the ClassName method will fail. Also unlike in Delphi 7, you cannot use the virtual keyword with class static methods.
Class Static Properties Like class methods, class static properties can be accessed without an object instance. The access methods or backing fields for class static properties must be declared class static themselves. Class static properties cannot be published, nor can they have stored or default value definitions.
Class Static Fields A class static field can be accessed without an object instance. Class static fields and properties are typically used as design tools; they allow you to declare variables and constants within the meaningful context of a class declaration.
Class Constructor A class constructor is a private constructor (it must be declared with class private visibility) that runs prior to the first use of the declaring class. The CLR offers no guarantee of when this will happen, except to say it will happen before the first use of the class. In CLR terms, this can get a bit tricky, because code is not considered "used" unless (and until) it is executed. A class can declare only one class constructor. Descendants can declare their own class constructors, but only one can be declared in any class.
You can't call a class constructor from source code; it is called automatically as a way to initialize class static fields and properties. Even the inherited keyword is prohibited, because the compiler takes care of this for you.
The following example class declaration illustrates the syntax for these new specifiers:
TMyClass = class class private // can only be accessed within TMyClass // Class constructor must have class private visibility class constructor Create; class protected // can be accessed in TMyClass and in descendants // Class static accessors for class static property P1, below class static function getP1 : Integer; class static procedure setP1(val : Integer); public // fx can be called without an object instance class static function fx(p : Integer) : Integer; // Class static property P1 must have class static accessors class static property P1 : Integer read getP1 write setP1; end;
Nested types are similar to class fields, in that they can be accessed through a class reference; an object instance is not needed. Declared within the scope of a class, nested types give you a way to use the enclosing class as a kind of namespace for the type.
Delphi has always had the ability to set an event listener—a function that is called when an event is fired. The CLR supports the use of multiple event listeners so that more than one function can respond when an event is fired. These are called multicast events. Delphi for .NET introduces two new property access methods, add and remove, to support multicast events. The add and remove methods can be used only on properties that are events.
To support multicast events, you must have a way to store all the functions that register themselves as listeners. As stated in Chapter 24, multicast events are implemented using the CLR MulticastDelegate class. And, as discussed there, the compiler hides a lot of complexity behind the scenes. The add and remove keywords handle the storage and removal of event listeners, but the containment mechanism is an implementation detail you aren't expected to deal with. The compiler automatically generates add and remove methods for you, and these methods implement storage of event listeners in an efficient way.
In the final release of Delphi for .NET, the add and remove methods should work hand in hand with an overloaded version of the standard functions Include and Exclude. In your source code, when you'd want to register a method as an event listener, you call Include. To remove a method, call Exclude. For example:
Include(EventProp, eventHandler); Exclude(EventProp, eventHandler);
Behind the scenes, Include and Exclude will call the methods assigned to the add and remove access functions, respectively. At the time of this writing, this technology wasn't working, so the book examples don't use it.
To support legacy code, the Delphi assignment operator (:=) still works as a way to assign a single event handler. The compiler generates code to go back and replace the last event handler (and only that event handler) that was set with the assignment operator. The assignment operator works separately and independently from the add/remove (or Include/Exclude) mechanism. In other words, the use of the assignment operator does not affect the list of event handlers that have been added to the MulticastDelegate.
As an example, you can refer to the XmlDemo program. The following code snippet (the working code at the time of this writing) creates a button at run time and installs two event handlers for its Click event:
MyButton := Button.Create; MyButton.Location := Point.Create ( Width div 2 - MyButton.Width div 2, 2); MyButton.Text := 'Load'; MyButton.add_Click (OnButtonClick); MyButton.add_click (OnButtonClick2); Controls.Add (MyButton);
Recall from Chapter 24 that one of the requirements of the CLI is an extensible metadata system. All .NET language compilers are required to emit metadata for the types defined within an assembly. The extensible part of extensible metadata means that programmers can define their own attributes and apply them to just about anything: assemblies, classes, methods, and more. The compiler emits these into the assembly's metadata. At run time, you can query for the attributes that were applied to an entity (assembly, class, method, and so on) using the methods of the CLR class System.Type.
Custom attributes are reference types derived from the CLR class System.Attribute. Declaring a custom attribute class is just like declaring any other class (this code snippet is extracted from the trivial NetAttributes project part of the LanguageTest folder):
type TMyCustomAttribute = class(TCustomAttribute) private FAttr : Integer; public constructor Create(val: Integer); property customAttribute : Integer read FAttr write FAttr; end; ... constructor TMyCustomAttribute.Create(val: Integer) begin inherited Create; customAttribute := val; end;
The syntax for applying the custom attribute is similar to that of C#:
type [TMyCustomAttribute(17)] TFoo = class public function P1(X : Integer) : Integer; end;
The custom attribute is applied to the construct immediately following it. In the example, it is applied to the class TFoo. No doubt you noticed that the custom attribute syntax is nearly identical to that of Delphi's GUID syntax. Here we have a problem: GUIDs are applied to interfaces; they must immediately follow the interface declaration. Custom attributes, on the other hand, must immediately precede the declaration to which they apply. How can the compiler determine whether the thing in the square brackets is a traditional Delphi-style GUID (which should be applied to the preceding interface declaration) or a .NET-style custom attribute (which should be applied to the first member of the interface)?
There is no way to tell, so you have to punt—make a special case for custom attributes and interfaces. If you apply a GUID to an interface, it must immediately follow the declaration of the interface, and it must follow the established Delphi syntax:
type interface IMyInterface ['(12345678-1234-1234-1234-1234567890ab)']
CLR's GuidAttribute custom attribute is used to apply GUIDs; it is part of the System.Runtime.InteropServices namespace. If you use this custom attribute to apply a GUID, then you must follow the CLR standard and put the attribute declaration before the interface.
Class helpers are an intriguing new language feature added to Delphi for .NET. The main reason for supporting class helpers is the way Borland maps .NET core classes with its own RTL classes, as covered later in the section "Class Helpers for the RTL." Here I will focus on this feature from a language perspective.
A class helper gives you a way to extend a class without using derivation, by adding new methods (but not new data). The odd fact, compared to inheritance, is that you can create objects of the original class, which is extended maintaining the same name. This means you can plug-in methods to an existing object of an existing class. A simple example will help clarify the idea.
Suppose you have a class (probably one you haven't written yourself—otherwise you could have extended it right away) like this:
type TMyObject = class private Value: Integer; Text: string; public procedure Increase; end;
Now you can add a Show method to objects of this class by writing a class helper to extend it:
type TMyObjectHelper = class helper for TMyObject public procedure Show; end; procedure TMyObjectHelper.Show; begin WriteLn (Text + ' ' + IntToStr (Value) + ' -- ' + Self.ClassType.ClassName + ' -- ' + ToString); end;
Notice that Self in the class helper method is the object of the class that the helper is for. You can use it like this:
Obj := TMyObject.Create; ... Obj.Show;
You'll end up seeing the name of the TMyObject class in the output. If you inherit from the class, however, the class helper will also be usable on the derived class (so you end up adding a method to an entire hierarchy), and everything will work properly. For your experiments, refer to the ClassHelperDemo example in the LanguageTest folder. | http://etutorials.org/Programming/mastering+delphi+7/Part+IV+Delphi+the+Internet+and+a+.NET+Preview/Chapter+25+Delphi+for+.NET+Preview+The+Language+and+the+RTL/New+Delphi+Language+Features/ | CC-MAIN-2017-30 | refinedweb | 3,017 | 52.8 |
I am a beginner to dapper . I was going through the code and building samples . But I am having problems in retrieving data . My code is as follows
Console.WriteLine("Reading Values"); string readSatement = "select * from employee where Id=@Id "; IEnumerable<Employee> objEmp1 = con.Query<Employee>(readSatement, new { Id = empId }); var objEmp2 = con.Query(readSatement, new { Id = empId });
In this code objEmp2 retrieves values from db for the id passed . But objEmp1 gives null values for the attributes of the object .
Employee class is as below
public class Employee { public int EmpId { get; set; } public string EmpName { get; set; } public int EmpAge { get; set; } }
Whats wrong with the code .
You need to ensure all your database columns either match the properties in your class you are using for the query or you return the columns with names that match. For example in your query above, I believe you might want to write it like:
select Id as EmpId, otherColumn as Propertyname, etc.. from employee where Id = @Id | https://dapper-tutorial.net/knowledge-base/13080523/dapper---mapper-query-not-getting-values-where-as-dynamic-object-mapper-query-does | CC-MAIN-2021-04 | refinedweb | 167 | 66.54 |
Explore object oriented programming with classes and objects
In this tutorial, you'll build a console application and see the basic object-oriented features that are part of the C# language.
Prerequisites
- We recommend Visual Studio for Windows or Mac. You can download a free version from the Visual Studio downloads page. Visual Studio includes the .NET SDK.
- You can also use the Visual Studio Code editor. You'll need to install the latest .NET SDK separately.
- If you prefer a different editor, you need to install the latest .NET SDK.
Create your application
Using a terminal window, create a directory named classes. You'll build your application there. Change to that directory and type
dotnet new console in the console window. This command creates your application. Open Program.cs. It should look like this:
// See for more information Console.WriteLine("Hello, World!");
In this tutorial, you're going to create new types that represent a bank account. Typically developers define each class in a different text file. That makes it easier to manage as a program grows in size. Create a new file named BankAccount.cs in the Classes directory.
This file will contain the definition of a bank account. Object Oriented programming organizes code by creating types in the form of classes. These classes contain the code that represents a specific entity. The
BankAccount class represents a bank account. The code implements specific operations through methods and properties. In this tutorial, the bank account supports this behavior:
- It has a 10-digit number that uniquely identifies the bank account.
- It has a string that stores the name or names of the owners.
- The balance can be retrieved.
- It accepts deposits.
- It accepts withdrawals.
- The initial balance must be positive.
- Withdrawals can't result in a negative balance.
Define the bank account type
You can start by creating the basics of a class that defines that behavior. Create a new file using the File:New command. Name it BankAccount.cs. Add the following code to your BankAccount.cs file:
namespace Classes; public class BankAccount { public string Number { get; } public string Owner { get; set; } public decimal Balance { get; } public void MakeDeposit(decimal amount, DateTime date, string note) { } public void MakeWithdrawal(decimal amount, DateTime date, string note) { } }
Before going on, let's take a look at what you've built. The
namespace declaration provides a way to logically organize your code. This tutorial is relatively small, so you'll put all the code in one namespace.
public class BankAccount defines the class, or type, you're creating. Everything inside the
{ and
} that follows the class declaration defines the state and behavior of the class. There are five members of the
BankAccount class. The first three are properties. Properties are data elements and can have code that enforces validation or other rules. The last two are methods. Methods are blocks of code that perform a single function. Reading the names of each of the members should provide enough information for you or another developer to understand what the class does.
Open a new account
The first feature to implement is to open a bank account. When a customer opens an account, they must supply an initial balance, and information about the owner or owners of that account.
Creating a new object of the
BankAccount type means defining a constructor that assigns those values. A constructor is a member that has the same name as the class. It's used to initialize objects of that class type. Add the following constructor to the
BankAccount type. Place the following code above the declaration of
MakeDeposit:
public BankAccount(string name, decimal initialBalance) { this.Owner = name; this.Balance = initialBalance; }
The preceding code identifies the properties of the object being constructed by including the
this qualifier. That qualifier is usually optional and omitted. You could also have written:
public BankAccount(string name, decimal initialBalance) { Owner = name; Balance = initialBalance; }
The
this qualifier is only required when a local variable or parameter has the same name as that field or property. The
this qualifier is omitted throughout the remainder of this article unless it's necessary.
Constructors are called when you create an object using
new. Replace the line
Console.WriteLine("Hello World!"); in Program.cs with the following code (replace
<name> with your name):
using Classes; var account = new BankAccount("<name>", 1000); Console.WriteLine($"Account {account.Number} was created for {account.Owner} with {account.Balance} initial balance.");
Let's run what you've built so far. If you're using Visual Studio, Select Start without debugging from the Debug menu. If you're using a command line, type
dotnet run in the directory where you've created your project.
Did you notice that the account number is blank? It's time to fix that. The account number should be assigned when the object is constructed. But it shouldn't be the responsibility of the caller to create it. The
BankAccount class code should know how to assign new account numbers. A simple way is to start with a 10-digit number. Increment it when each new account is created. Finally, store the current account number when an object is constructed.
Add a member declaration to the
BankAccount class. Place the following line of code after the opening brace
{ at the beginning of the
BankAccount class:
private static int accountNumberSeed = 1234567890;
The
accountNumberSeed is a data member. It's
private, which means it can only be accessed by code inside the
BankAccount class. It's a way of separating the public responsibilities (like having an account number) from the private implementation (how account numbers are generated). It's also
static, which means it's shared by all of the
BankAccount objects. The value of a non-static variable is unique to each instance of the
BankAccount object. Add the following two lines to the constructor to assign the account number. Place them after the line that says
this.Balance = initialBalance:
this.Number = accountNumberSeed.ToString(); accountNumberSeed++;
Type
dotnet run to see the results.
Create deposits and withdrawals
Your bank account class needs to accept deposits and withdrawals to work correctly. Let's implement deposits and withdrawals by creating a journal of every transaction for the account. Tracking every transaction has a few advantages over simply updating the balance on each transaction. The history can be used to audit all transactions and manage daily balances. Computing the balance from the history of all transactions when needed ensures any errors in a single transaction that are fixed will be correctly reflected in the balance on the next computation.
Let's start by creating a new type to represent a transaction. The transaction is a simple type that doesn't have any responsibilities. It needs a few properties. Create a new file named Transaction.cs. Add the following code to it:
namespace Classes; public class Transaction { public decimal Amount { get; } public DateTime Date { get; } public string Notes { get; } public Transaction(decimal amount, DateTime date, string note) { Amount = amount; Date = date; Notes = note; } }
Now, let's add a List<T> of
Transaction objects to the
BankAccount class. Add the following declaration after the constructor in your BankAccount.cs file:
private List<Transaction> allTransactions = new List<Transaction>();
Now, let's correctly compute the
Balance. The current balance can be found by summing the values of all transactions. As the code is currently, you can only get the initial balance of the account, so you'll have to update the
Balance property. Replace the line
public decimal Balance { get; } in BankAccount.cs with the following code:
public decimal Balance { get { decimal balance = 0; foreach (var item in allTransactions) { balance += item.Amount; } return balance; } }
This example shows an important aspect of properties. You're now computing the balance when another programmer asks for the value. Your computation enumerates all transactions, and provides the sum as the current balance.
Next, implement the
MakeDeposit and
MakeWithdrawal methods. These methods will enforce the final two rules: the initial balance must be positive, and any withdrawal must not create a negative balance.
These rules introduce the concept of exceptions. The standard way of indicating that a method can't complete its work successfully is to throw an exception. The type of exception and the message associated with it describe the error. Here, the
MakeDeposit method throws an exception if the amount of the deposit isn't greater than 0. The
MakeWithdrawal method throws an exception if the withdrawal amount isn't greater than 0, or if applying the withdrawal results in a negative balance. Add the following code after the declaration of the
allTransactions list:
public void MakeDeposit(decimal amount, DateTime date, string note) { if (amount <= 0) { throw new ArgumentOutOfRangeException(nameof(amount), "Amount of deposit must be positive"); } var deposit = new Transaction(amount, date, note); allTransactions.Add(deposit); } public void MakeWithdrawal(decimal amount, DateTime date, string note) { if (amount <= 0) { throw new ArgumentOutOfRangeException(nameof(amount), "Amount of withdrawal must be positive"); } if (Balance - amount < 0) { throw new InvalidOperationException("Not sufficient funds for this withdrawal"); } var withdrawal = new Transaction(-amount, date, note); allTransactions.Add(withdrawal); }
The
throw statement throws an exception. Execution of the current block ends, and control transfers to the first matching
catch block found in the call stack. You'll add a
catch block to test this code a little later on.
The constructor should get one change so that it adds an initial transaction, rather than updating the balance directly. Since you already wrote the
MakeDeposit method, call it from your constructor. The finished constructor should look like this:
public BankAccount(string name, decimal initialBalance) { Number = accountNumberSeed.ToString(); accountNumberSeed++; Owner = name; MakeDeposit(initialBalance, DateTime.Now, "Initial balance"); }
DateTime.Now is a property that returns the current date and time. Test this code by adding a few deposits and withdrawals in your
Main method, following the code that creates a new
BankAccount:
account.MakeWithdrawal(500, DateTime.Now, "Rent payment"); Console.WriteLine(account.Balance); account.MakeDeposit(100, DateTime.Now, "Friend paid me back"); Console.WriteLine(account.Balance);
Next, test that you're catching error conditions by trying to create an account with a negative balance. Add the following code after the preceding code you just added:
// Test that the initial balances must be positive. BankAccount invalidAccount; try { invalidAccount = new BankAccount("invalid", -55); } catch (ArgumentOutOfRangeException e) { Console.WriteLine("Exception caught creating account with negative balance"); Console.WriteLine(e.ToString()); return; }
You use the
try and
catch statements to mark a block of code that may throw exceptions and to catch those errors that you expect. You can use the same technique to test the code that throws an exception for a negative balance. Add the following code before the declaration of
invalidAccount in your
Main method:
// Test for a negative balance. try { account.MakeWithdrawal(750, DateTime.Now, "Attempt to overdraw"); } catch (InvalidOperationException e) { Console.WriteLine("Exception caught trying to overdraw"); Console.WriteLine(e.ToString()); }
Save the file and type
dotnet run to try it.
Challenge - log all transactions
To finish this tutorial, you can write the
GetAccountHistory method that creates a
string for the transaction history. Add this method to the
BankAccount type:
public string GetAccountHistory() { var report = new System.Text.StringBuilder(); decimal balance = 0; report.AppendLine("Date\t\tAmount\tBalance\tNote"); foreach (var item in allTransactions) { balance += item.Amount; report.AppendLine($"{item.Date.ToShortDateString()}\t{item.Amount}\t{balance}\t{item.Notes}"); } return report.ToString(); }
The history uses the StringBuilder class to format a string that contains one line for each transaction. You've seen the string formatting code earlier in these tutorials. One new character is
\t. That inserts a tab to format the output.
Add this line to test it in Program.cs:
Console.WriteLine(account.GetAccountHistory());
Run your program to see the results.
Next steps
If you got stuck, you can see the source for this tutorial in our GitHub repo.
You can continue with the object oriented programming tutorial.
You can learn more about these concepts in these articles:
Feedback
Submit and view feedback for | https://learn.microsoft.com/en-us/dotnet/csharp/fundamentals/tutorials/classes | CC-MAIN-2022-40 | refinedweb | 1,999 | 50.33 |
Hide Forgot
Looks like the libgdbm allocates dynamic memory, then writes
it out to the GDBM file without fully initializing the
entire memory buffer. Observe:
#include <gdbm.h>
#include <stdlib.h>
int main()
{
GDBM_FILE g;
char *foo;
int i;
foo=malloc(8192);
for (i=0; i<8192; i++)
foo[i]= "secret"[i % 6];
free(foo);
g=gdbm_open("foo.dat", 0, GDBM_WRCREAT, 0644, 0);
gdbm_close(g);
}
Link this with -lgdbm, and run it. Afterwards, examine
foo.dat. It should have "secret" splattered all over it.
This was tested with gdbm-1.7.3-19 and glibc-2.1.1-6. All
versions of Red Hat are probably vulnerable.
There are security implications here. If your app handles
sensitive information, like passwords, in dynamic memory,
and then frees it, and if the freed memory is recycled and
allocated by libgdbm, your sensitive information can end up
being splattered in unused portions of your GDBM files.
Earlier this year MS got reamed for doing the same thing
with some Office files.
I ran some tests, it appears that libdb and libdb1 are not
vulnerable to the same problem.
Fixed in gdbm-1.8.0-2. The header (block 0) was not initializing
memory to 0. | https://bugzilla.redhat.com/show_bug.cgi?id=4457 | CC-MAIN-2020-10 | refinedweb | 202 | 60.31 |
$95.20.
Premium members get this course for $168.80.
Premium members get this course for $349.00.
Premium members get this course for $79.20.
If you can totally avoid using an MFC calls, or any MFC support at all, then you can use RE 2.0 Otherwise forget it.
Since you are not able to do what you want (set background color) anyway, just remove the hack. Or better yet, start a clean new AppWizard program an try some tests. I think you'll see that MFC works perfectly with RE 1.0 and will load your files in 500 milliseconds.
-- Dan
IMHO it is MS RichEdit control limitation so nothing you can do to accelerate this.
GOOD LUCK
Enroll today in this bundle of courses to gain experience in the logistics of pen testing, Linux fundamentals, vulnerability assessments, detecting live systems, and more! This series, valued at $3,000, is free for Premium members, Team Accounts, and Qualified Experts.
Can I write a CLLBACK function that overrides the one in the MSDN?
I know how to make an application with CRichEditView.
Pay attention to what I'm asking!
I beg your pardon for commenting with an alternative.
Wish You All The Best
Roshmon
ITextDocument::Open call and see if there is any
speed improvement.
Something like (and this is a big ugly guess
'cause I haven't check this code).
#include <atlbase.h>
#include <tom.h>
BOOL LoadFile(CRichEditCtrl& edit, LPCTSTR pzFilename)
{
CComVariant varName(pzFilename);
CComPtr<IRichEditOle> pRichEditOle;
pRichEditOle.Attach(edit.G
CComQIPtr<ITextDocument> pTextDocument(pRichEditOle
return SUCCEEDED(pTextDocument->O
}
Now that's a great way to treat someone who is trying to help you. Abusive users just aren't worth the effort...
I think I'll just leave you alone also.....
Can you please explain me your suggestion.
I see that you use a COM interface, is this is an object that I should add to the project in case I'm doing an installation program to my prorject?
>that I should add to the project in case I'm
>doing an installation program to my prorject?
No need to add anything. Just try the code I gave you.
All RichEdit controls have that COM interface hidden
deep within their dark heart.
good luck.
I inserted your code and I got these errors:
about the #include <tom.h> - no such file
When I removed this include I got this error:
GetOleInterface' : is not a member of 'CRichEditCtrl'
Well, I've successfuly compiled it, but it seems that it doesn't work.
The window is opened but with no content.
any idea?
I'm sorry if you were hurt from what I told you, I didn't mean to.
I appreciate your good will.
I would be glad if you could help me.
1) Check that the RTF file you are opening will open happily
in WORDPAD.
2) try replacing the last line with...
HRESULT hr = pTextDocument->Open(varNam
return SUCCEEDED(hr);
(Note the extra flag tomRTF)
3) let me know what the value of the "hr" result is.
4) Zip the project up and email to ggrundy@bigpond.com
Cheers
I tried what you offered, and it seems a little bit better, but still this is slow.
If you take a file of 100k or more it takes few seconds.
Also, I want to view *.s files that they are text files, and I gor hr result which says that he can't find the files.
If WordPad can't open your document with the response times your require your only option will be to use some form of "Just in Time" demand loading of a roll-your-own text control, or maybe load your RichEdit control up in a lowpriority background thread, so that at least the user isn't stuck, waiting around.
PS
There is a tomText, a tomHTML & a tomWordDocument
flag which you can also use. But I understand if you
leave them out the interface will try to figure out
the correct format dynamically.
But I look at the visual studio's speed.
How did they do it?
It is probably, as I suggested, a roll-you-own edit window using a just-in-time loading technique.
Also, you can increase load speed as follows:
Read a large block of the file into RAM: I tested by reading the entire 100K file into a buffer. Then in your callback, you don't need to do individual 4K disk reads chunks as requested. Instead, just transfer part of the 100K block.
In summary, do all of the file I/O at one time, then use high-speed memory block transfers in the callback. After that you are limited by the speed at which the RichText control can process the incoming data -- which will be faster if you are processing regular text than if you are processing RTF.
I can provide code if you can't understand what I am suggesting.
-- Dan
typedef struct {
char* pBuf;
int nCur;
int nMax;
} MyReCallbackData;
static DWORD CALLBACK
MyStreamInCallback(DWORD dwCookie, LPBYTE pbBuff, LONG cb, LONG *pcb)
{
MyReCallbackData* pr= (MyReCallbackData*)dwCooki
if ( pr->nCur >= pr->nMax ) {
*pcb= 0;
return(0);
}
*pcb= cb; // assume normal block
if ( pr->nCur + cb > pr->nMax ) { // else final partial block
*pcb= pr->nMax - pr->nCur;
}
memmove( pbBuff, pr->pBuf+pr->nCur, *pcb );
pr->nCur += *pcb;
return 0; // 0= success
}
void CMyDlg::OnButton1()
{
MyReCallbackData rMRCD;
EDITSTREAM rES;
CFile cFile( "c:\\temp\\testfile.rtf", CFile::modeRead );
//------------------------
rMRCD.nMax= cFile.GetLength();
rMRCD.nCur= 0;
rMRCD.pBuf= new char[ rMRCD.nMax ];
cFile.Read( rMRCD.pBuf, rMRCD.nMax );
// dox are wrong: must set or limits to 32k!
m_ctlRichEdit.LimitText( rMRCD.nMax );
rES.dwCookie= (DWORD)&rMRCD;
rES.pfnCallback= MyStreamInCallback;
m_ctlRichEdit.StreamIn( SF_RTF, rES );
}
-- Dan
First of all, thanks.
It is still as slow as it was.
The part of reading the data using the CALLBACK function is slow.
I see that the block size of reading are still 4K.
Maybe I miss something here?
The 4K block is the amount of data that the control requests. It is not possible to control the block size. Also, it is irrelevant because the requests come very quickly. My technique minimizes disk access overhead which will speed things up, but probably not a lot unless you have a really slow disk.
I think that perhaps you are running a very slow computer. In that case, never fear because when your program runs on other computers it will run very quickly.
-- Dan
also speed of loading depends on document complexity.
It doesn't read the document in a "blind" way?
Dan - I don't know, but it is as slow as it was before.
I have a 550MHz CPU!!!
Also, I try to read a text file so I changed the code to be:
m_ctlRichEdit.StreamIn( SF_TEXT, rES );
It has any connection?
I mean - number of styles frames, pictures, ole objects, tables and so on...
RTF format (and similar formats too) required lot of work to transfer stream into the text on the screen.
shmuetal,
I don't suppose that you are using a non-English version or Windows or importing DBCS or UNICODE data are you?
Here are some other variables could affect RE performance:
* Version of RE control
* Other simultaneous processes hogging the CPU
* Network latency: Is the file on a local hard disk?
Disk latency: Is the disk slow?
* RE Control settings: Are you using any oddball settings or callbacks or notif masks or anything? My testing assumed ALL defaults: I added an RE control to a dialog-based MFC app and then added just the code that I provided here (plus AfxInitRichEdit in InitInstance)
-- Dan
Do you have any influence over the format in which the documents are stored in the first place?
If so, you will find that "Word for Window 6.0 is a slightly faster format.
(tomWordDocument)
-My window also supports hebrew.
-The rich edit control I'm using is a CRichEditView in an MDI application. This is the view of my program.
-The file that I'm trying to read is local.
-Regarding simultaneous processes: well this is a workstaion ad I have the MSDN open and the Outlook and also the regular services running.
-The version of the RE control is what I got from the visual studio as I define this application to have a CRichEditView as a view.
-My disk is 5400 RPM, the older one - do you think that this is the reason?
I'm sorry to not answering you on Friday or Satursay, but we do not work in those days.
Thanks
cFile.Read( rMRCD.pBuf, rMRCD.nMax );
Single step across that line... that is how long it takes to load the file into memory.
Next, single step down to the line:
m_ctlRichEdit.StreamIn( SF_RTF, rES );
and single-step across it... That is how long it takes for the RE control to process the data.
I have heard other users complain about RE being slow but never seen any evidence. However, the hebrew support could easily throw a monkey wrench into the works. I think it will need to right-justify everything in the file... a VERY time-consuming task.
Is there any way to turn off hebrew support in order to see if that is what's slowing things down?
-- Dan
I checked the what you say and the time consumer is the:
"WorkingRTF.StreamIn(SF_RT
So it is not the disk!
It seems that the control is the problem.
Maybe if I take the rich edit v2 or v3 it will be better?
I don't think that the Hebrew is the problem, but maybe you right. I will try to find a way to disable it.
can you show part your RTF file?
I dont have any RTF file.
I just takes a text file or .cpp file and trying to view it in the control.
The 150k file I try to view is a text file and if you want I can send you it.
why you use SF_RTF flag to load just text (cpp) file?
-- Dan
The cpp file is processed in a function I wrote and I color the text like in the Visual Studio Editor so it must be RTF.
may be main problem in your processor?
You know what, I will check it on another computer!
hmm I mean that preprocessor of the cpp file may slow your loading
or you insert plan cpp file as RTF?
To test, open up WordPad and create some text. Then highlight a few passages and make them bold. Makes ome other stuff italic. Now copy and paste until you have a 100KB file. Save it and then try loading it in your program.
If the headers of the RTF are complicated (by defining many styles and fonts and so forth) the RTF processing will take much longer.
-- Dan.
If I read from an RTF file, it pops up immedialtly no matter what's the size of the file.
When I open a CPP file and do what I do it takes more time.
For example I opened an RTF file that his size is 250 KB and it took right away.
Then, I opened a CPP file that his size is 40 KB and it took a bit more.
What do you say?
Should I convert my files into RTF files first?
It is not diferenet from what you know.
If I put at the szSourceFile a string that is a path of an RTF file the part of running the callback function is very fast.
// Load source file text
if(!sourceFile.Open(szSour
{
//If fail to open file
#ifdef _DEBUG
char szErrMsg[255];
ex.GetErrorMessage(szErrMs
afxDump << "Fail to open file: " << szErrMsg << "\n";
#endif // _DEBUG
return;
}
// Process code
tokenizer.SetLanguageInfo(
tokenizer.SetText(&szCode)
tokenizer.FindTokens();
// Export result
MyReCallbackData rMRCD;
EDITSTREAM rES;
rMRCD.nMax= sourceFile.GetLength();
rMRCD.nCur= 0;
rMRCD.pBuf= new char[ rMRCD.nMax ];
sourceFile.Read( rMRCD.pBuf, rMRCD.nMax );
// dox are wrong: must set or limits to 32k!
WorkingRTF.LimitText( rMRCD.nMax );
rES.dwCookie= (DWORD)&rMRCD;
rES.pfnCallback= MyStreamInCallback;
WorkingRTF.StreamIn( SF_RTF, rES );
posBlock = tokenizer.GetBlocks()->Get
cf.dwMask = CFM_COLOR;
while (posBlock != NULL)
{
blockItem = tokenizer.GetBlocks()->Get
switch (blockItem.blockType)
{
case Normal:
cf.crTextColor = pLI->crNormalColour;
break;
case Comment:
cf.crTextColor = pLI->crCommentColour;
break;
case String:
cf.crTextColor = pLI->crStringColour;
break;
case Keyword:
cf.crTextColor = pLI->crKeywordColour;
break;
case Operator:
cf.crTextColor = pLI->crOperatorColour;
break;
default:
cf.crTextColor = pLI->crNormalColour;
break;
}
WorkingRTF.SetSel(blockIte
WorkingRTF.SetSelectionCha
WorkingRTF.SetSel(0,0);
}
delete [] rMRCD.pBuf;
WorkingRTF.UnlockWindowUpd
sourceFile.Close();
Are you saying that regular RTF loads quickly? Are you saying that the delay has been caused by some outside tokenizing process?
shmuelal,
The delay is in the tokenizer object and/or the code you execute after streaming in the text.
Why have you been wasting our time?
-- Dan
even if I put my tokenizer in comment, it has the same perfomance.
That's what I try to explain to you.
Try to take a large cpp file and try to use your code to load this file to the richedit control.
I'm sorry for wasting your time, but I'm sure there is a problem.
Lets go back one step. If you use:
WorkingRTF.StreamIn(SF_TEX
to read in the cpp file, does it go quickly? If so, then use it. Read a text file as text.
-- Dan
there is an improvement.
but now I can't colorize the code.
so you have to convert cpp to rtf during loading stream into the control
Can I do the convertion from cpp file to rtf file in the memory and not to create an RTF file?
Whaat makes you think thaat? I don't suppose that you took a few seconds to try, did you? In this I read in a 150KB cpp file, and then colorized some code.
CFile cFile( "c:\\temp\\testfile.cpp", CFile::modeRead );
...
m_ctlRichEdit.StreamIn( SF_TEXT, rES );
m_ctlRichEdit.SetSel(20,50
CHARFORMAT rCF;
rCF.dwMask= CFM_STRIKEOUT|CFM_BOLD |CFM_COLOR;
rCF.dwEffects= CFE_BOLD;
rCF.crTextColor= RGB(255,0,0);
m_ctlRichEdit.SetSelection
-- Dan
I think that this is it.
Give me few hours with this and if I don't have any problem I will accept your comments.
Can you try to load a .txt file with a sze of about 150 KB.
I tryed to read it with SF_TEXT and it takes long time.
I'm using the code you sent me and this is the only thing I do, which mean loading the file into the control.
It takes almost 10 seconds the part of the calling to the CALLBACK function.
Thanks
-- Dan
I checked it on another computer that is just with English and it was very fast.
What is going on here? you right, I think that it relates in a certain way to the Hebrew that installed in my computer.
It means that I have to find a solution to this problem too, because this application will be used on computer that also has Hebrew enabled and if it will be load like it is now I it wouldn't be good.
Maybe selecting a different default font before loading will make a difference. Maybe setting a different default paragraph format before loading will make a difference.
Try this. Before calling StreamIn(), call GetDefaultCharFormat(). Report here what is returned in the CHARFORMAT structure. Also call GetParaFormat() and report here what is returned in the PARAFORMAT structure.
Try some experiments by calling SetDefaultCharFormat() with various settings before calling StreamIn(). Try some more experiments by calling SetParaFormat() with various settings before calling StreamIn().
-- Dan
The CHARFORMAT2:
cbSize = 0x0000003c
dwMask = 0xf800003f
dwEffects = 0x4000000
yHeight = 0x000000c8
yOffset = 0x00000000
crTextColor = 0x00000000
bCharSet = 0x00
bPitchAndFamily = 0x00
szFaceName = "Courier"
-From some reason the extra attributes that belong to the CHARFORMAT2 are with junk values.
The PARAFORMAT2:
cbSize = 0x0000009c
dwMask = 0x8001003f
wNumbering = 0x0000
wReserved = 0x0000
dxStartInden = 0x00000000
dxRightInden = 0x00000000
dxOffset = 0x00000000
wAlignment = 0x0001
cTabCount = 0x0000
-About the other attributes that belong to the RichEdit2 - all with junk values
RE style: WS_CHILDWINDOW, VISIBLE,CLIPSIBLINGS, MAXIMIZE, VSCROLL, HSCROLL, OVERLAPPED, 000081C4
RE Ex-Style: WS_EX_LEFT, LTRREADING, RIGHTSCROLLBAR, ACCEPTFILES, CLIENTEDGE.
The junk fields are becasue MFC RichEdit supports RE 1.0 which uses the older layout. I hope that you are not trying that hack that attempts to use RichEdit 2.0 are you?
My settings were different. Default font was "MS San Serif" and yHeight was 0xa5. However, when I changed to your settings, my code ran just as fast as ever. You could try setting to use "Courier New" which is a TT font ("Courier" is not).
=-=-=-=-=-=-=-=-=-=-=-=-
If there is *anything* unusual about your code, then please start over with a AppWizard created Dialog-based app, add a RE control and procede with testing.
Another thing to check: As your program starts up, the debug trace window will show the modules that get loaded. On mine, I see:
Loaded 'C:\WINNT\system32\riched3
Loaded 'C:\WINNT\system32\riched2
And when I go to that directory and right-click the properties/version shows:
riched32.dll 5.0.2134.1 ("Rich Text Edit Control, v3.0"
Language: English (United States)
riched20.dll 5.30.23.1205 ("wrapper for dll richedit 1.0)
Language: English (United States)
=-=-=-=-=-=-
If yours are different, then you could try this: Go to the computer that runs quickly and copy these two dlls into your testing executable directory and verify that they get loaded (rather than the ones in the system dir). See if it makes any difference.
-- Dan
First of all, yes! I'm trying to use this hack that loads the Riched20.dll.
I wanted to color the background of the text and I needed the richedit 2.0 features.
But it doesn't work for some reason so I guess that I do not have the rich edit 2.0.
the versions I got:
riched32.dll 5.00.2134.1
Language: English (United States)
riched20.dll 5.30.23.1203
Language Neutral (???)
In the other computer the version of the riched20.dll was 5.30.23.1200 with the same language.
I have a good thing to tell you:
I managed to read the file very fast by changing the file opening to be binary and not text.
I got into this conclusion when I've opened a CPP file and I got it corrupted and that is (I assume) because there weren't any \r's.
In binary mode I'm loading in 1 or 2 seconds a file of 150 KB size.
Any way,
Thanks for everything and because of you're bothering I will give you the points you deserve.
Thanks again
Alon Shmuel
CFile cFile( "c:\\temp\\testfile.rtf", CFile::modeRead );
...you did something else? Just curious.
-- Dan
I added the CFile::modeBinary to this line and the file I'm opening is not an RTF but a CPP file.
It is very fast.
I had a problem when I wanted to color the tokens because that I missed the line feader, but I get over it by reducing from each token position an offset that is equals to the line I'm currently at.
if a file of 150kb was read in 11 seconds, now it takes 1 second!!! | https://www.experts-exchange.com/questions/20307134/Reading-data-from-a-big-file-to-CRichEditCtrl-is-very-slow.html | CC-MAIN-2018-13 | refinedweb | 3,210 | 75.71 |
punk thread. favorite punk albums?
mine is pic related. dean's dream is masterpiece 2bh
>>61767279
That's a great album OP, but if it's you're favorite punk album you're not really much of a punk
>pic kinda related, one of my many favorites
recently became one of my faves
>>61767363
yeah i know the dead milkmen is not that punk-y band even. i used to listen more hardcore punk put recently gotten into this mellow type shit.
you have any recs for me ? like similar to the dead milkmen or minutemen or something
>>61767387
good 'un
great fucking album op, what are some albums that have the same vibes?
>>61767410
>similar to the dead milkmen or minutemen
I was actually a punk in the late 80s and I knew kids that would kinda be on the fringes of the punk scene and cite these bands as their favorites - my friends and I kinda laughed at them for not being as punk as fuck as we were (i was a teen, shut up, it's cringey but true) but were never dicks because they usually had beer money ( but didn't drink much) and a car (and license and insurance!)
Anyhow, check out The Rezillos, they are fun as fuck and hard not like
and Angry Samoans who are also funny as fuck but a little harder edged
oh and also Vandals
also it seems like you'd like Descendants, Adolescents, Bad Religion - but they're not my cup of tea so get some reccs from someone else
>pic most likely unrelated to yuor tastes, but very related to mine
Here's an old but good introduction chart
Fucking glorious album
>>61767965
nice! will definitely check these out cheers
>>61768604
>tfw too old, too employed, too living in small upper plains oilfield town to wear crust pants any more
Should I sell my old pair to some oogle on ebay? I still have them, and the last time they were washed the twin towers were still a thing
I'll checking out some of these bands, nice
>>61768604
love these too man, proud to be a finn.
>>61768743
how old you dude, like, 40? i think you should rock your crust pants the first thing at work on monday and check reactions..
>>61768743
my friend, i am a young lil 20 summin. i wear my crust pants to work. only time not able is if weather prohibits it.
id rather see you donate them pants to a collector or somthing then see oogle scum in possession of such.
post pics, post fit.
>>61768763
you should be proud!
>>61768873
I work in the oilfield and so I gotta wear Fire Resistant overalls at work, but they get crusty as fuck (as do whatever I wear under them) within a few hours at work.
Seriously, I knew a guy who would smear black acyrlic paint on his new skinny jeans to get that unwashed for a year look - if only he knew how quick diesel/saltwater drilling mud will do the same thing
>pic related is co-worker
>>61768903
cba to posy pics, I gonna go to the bar soon, but I got two pairs, one with crusty illegible band patchs, and one thats predominately patched with leopard print and rubber cut from motorcycle inner tubes
size is 32 x 32 I think
>>61769137
i see you work with MC Ride, how cool is that! no but seriously, that's one dirty ass job you got there mate
>inb4 not real punk
You're just mad because they did it better and got rich off their music and your favorite "fuck le system" band didn't.
>>61769137
theres a thread in fa if you come on later, ive posted some personal pics as well ;)
>>>No.10838506
>>61769345
You're a faggot
>>61769345
check deez crust shorts cut offs
>>61769368
Thanks for proving my point :)
>>61768604
Cimex is GOAT punk
>>61769431
anti cimex are a personal favorite of mine, their early albums and EPs= classic
later nineties albums= bangers
not many 80s punk bands cant pull that off, most lose edge and sound like shit into the nineties and 2000s
personally i think absolute country of sweden is one of thier most solid albums
>>61769537
Scandinavian Jawbreaker is probably my favourite, I have the Svart records repress off all of their studio albums, comes with a bit of commentary on how the albums were developed and worked with.
>>61768604
>>61768903
thank you for these. great to see Sacrilege on that second chart, i love them. have you heard Death Evocation's self-titled 7"? sounds quite similar to Sacrilege but for some reason it's probably my favorite crust release ever
do you have others?
Crass - Feeding of the 5000
>>61769666
sacrilege is def underrated, especially for crust crossover with female vocals, listening to death evocation now as i have not heard it before,
if you mean more charts then unfortunately no, i made these for the last rounds of punk threads, was thinking about making a japcore one and a american punk charts of my own liking.
you need some specific recs?
>>61769666
def sounds alot like sacrilege, def diggin it man, thanks, do you know when Death Evocation S/T was released?
>>61767279
hahah forgot about these guys. Saw them at FYF couple years ago. dan's dream is p sweet.
gonna go dl this rn
>>61767410
>>61769895
will do, sir!
>>61769818
discogs says 2011.
no idea why they haven't released anything else. the quality of songwriting, playing, and band chemistry is waaaaaayyyy too good for a demo | http://4archive.org/board/mu/thread/61767279 | CC-MAIN-2016-44 | refinedweb | 942 | 71.68 |
Hi I have a small python gui interface with two buttons, start(That starts a counter) and stop (that is suppose to stop the counter), the counter is an infinite loop since I do not want it to end unless the second button is clicked. The problem is the second button cannot be clicked while the function from the first one is still running.
I read that I need to use threading and I have tried but I do not fully understand how I can do this. Please help.
from Tkinter import *
import threading
class Threader(threading.Thread):
def run(self):
for _ in range(10):
print threading.current_thread().getName()
def main(self):
import itertools
for i in itertools.count(1, 1):
print i
def other(self):
print "Other"
m = Threader(name="main")
o = Threader(name="other")
try:
'''From here on we are building the Gui'''
root = Tk()
'''Lets build the GUI'''
'''We need two frames to help sort shit, a left and a right vertical frame'''
leftFrame = Frame(root)
leftFrame.pack(side=LEFT)
rightFrame = Frame(root)
rightFrame.pack(side=RIGHT)
'''Widgets'''
'''Buttons'''
playButton = Button(leftFrame, text="Play", fg="blue", command=m.main)
stopButton = Button(rightFrame, text="Stop", fg="red", command=o.other)
playButton.pack(side=TOP)
stopButton.pack(side=BOTTOM)
root.mainloop()
except Exception, e:
print e
Here's a short example of using
threading. I took out your
other function and I don't know why your using
itertools here. I took that out as well and simply setup using a simple threading example.
A few things:
You setup using
threading.Thread as the base class for
Threader, but you never actually initialized the base class.
Whenever you use threading you generally want to define a
run method and then use
start() to start the thread. Calling
start() will call
run.
You need to use threading to prevent your GUI blocking, because tkinter is just one thread on a giant loop. So, whenever you have some long running process it blocks this thread until the current process is complete. That's why it's put in another thread. Python has something called the GIL, which prevent's true parallelization (I made up that word) since it only one thread can ever be used at a time. Instead, it uses time slicing, the GIL sort of "polls" between them to give the appearance of multiple tasks running concurrently. For true parallel processing you should use
multiprocessing.
In the below code I have used
self.daemon = True. Setting the thread to be a daemon will kill it when you exit the main program (In this case the Tk GUI)
from tkinter import * import threading, time class Threader(threading.Thread): def __init__(self, *args, **kwargs): threading.Thread.__init__(self, *args, **kwargs) self.daemon = True self.start() def run(self): while True: print("Look a while true loop that doesn't block the GUI!") print("Current Thread: %s" % self.name) time.sleep(1) if __name__ == '__main__': root = Tk() leftFrame = Frame(root) leftFrame.pack(side=LEFT) rightFrame = Frame(root) rightFrame.pack(side=RIGHT) playButton = Button(leftFrame, text="Play", fg="blue", command= lambda: Threader(name='Play-Thread')) stopButton = Button(rightFrame, text="Stop", fg="red", command= lambda: Threader(name='Stop-Thread')) playButton.pack(side=TOP) stopButton.pack(side=BOTTOM) root.mainloop() | https://codedump.io/share/MU8Ile4kVoez/1/python-tkinter-how-can-i-prevent-tkinter-gui-mainloop-crash-using-threading | CC-MAIN-2016-44 | refinedweb | 546 | 65.93 |
08 December 2010 02:50 [Source: ICIS news]
DUBAI (ICIS)--Kuwait Aromatics (KARO) is planning to restart its Shuaiba-based plant producing benzene and paraxylene (PX) on 12 December following an unexpected shutdown last Friday, said a company source late on Tuesday.
Power supply issues shut the unit producing 820,000 tonnes/year of PX and 400,000 tonnes/year of benzene on 3 December, prompting a declaration of a force majeure on PX supply.
The company tried to restart the facility but found leaks at two plant pipes, the source said. The problem is currently being fixed, he added.
An estimated 25,000 tonnes of PX and about 12,500 tonnes of benzene would be lost during the 10-day shutdown of the Shuaiba plant, he said.
KARO supplies its entire benzene supply from the plant to the downstream 450,000 tonne/year styrene monomer (SM) unit of Equate Petrochemical. Market sources said that the SM plant was also down, but this could not be confirmed.
KARO is a joint venture (JV) between ?xml:namespace>
PIC and KNPC each has a 40% stake in the JV, while QPIC owns the remaining 20%.
With additional reporting by Clive On | http://www.icis.com/Articles/2010/12/08/9417513/gpca-10-kuwait-aromatics-to-restart-shuaiba-plant-on-12-dec.html | CC-MAIN-2015-11 | refinedweb | 199 | 69.52 |
Chart Data
Chart data is stored in a data series model that contains information about the
visual representation of the data points in addition to their values. There are
a number of different types of series -
DataSeries,
ListSeries,
HeatSeries, and
RangeSeries.
List Series
The
ListSeries is essentially a helper type that makes the handling
of simple sequential data easier than with
DataSeries. The data
points are assumed to be at a constant interval on the X axis, starting from the
value specified with the pointStart property (default is 0) at
intervals specified with the pointInterval property (default is
1.0). The two properties are defined in the
PlotOptions for the
series.
The Y axis values are given as constructor parameters or using the
setData() method.
ListSeries series = new ListSeries( "Total Reindeer Population", 181091, 201485, 188105); PlotOptionsLine plotOptions = new PlotOptionsLine(); plotOptions.setPointStart(1959); series.setPlotOptions(plotOptions); conf.addSeries(series);
You can also add them one by one with the
addData() method.
If the chart has multiple Y axes, you can specify the axis for the series by its
index number using
setyAxis().
Generic Data Series
The
DataSeries can represent a sequence of data points at an
interval as well as scatter data. Data points are represented with the
DataSeriesItem class, which has x and y
properties for representing the data value. Each item can be given a category
name.
DataSeries series = new DataSeries(); series.setName("Total Reindeer Population"); series.add(new DataSeriesItem(1959, 181091)); series.add(new DataSeriesItem(1960, 201485)); series.add(new DataSeriesItem(1961, 188105)); series.add(new DataSeriesItem(1962, 177206)); // Modify the radius of one point series.get(2).getMarker().setRadius(20); conf.addSeries(series);
Data points are associated with some visual representation parameters: marker style, selected state, legend index, and dial style (for gauges). Most of them can be configured at the level of individual data series items, the series, or in the overall plot options for the chart. The configuration options are described in "Chart Configuration". Some parameters, such as the sliced option for pie charts is only meaningful to configure at item level.
Adding and Removing Data Items
New
DataSeriesItem items are added to a series with the
add() method. The basic method takes just the data item, but the
other method takes also two boolean parameters. If the updateChart
parameter is false, the chart is not updated immediately. This is
useful if you are adding many points in the same request.
The shift parameter, when true, causes removal of the first data point in the series in an optimized manner, thereby allowing an animated chart that moves to left as new points are added. This is most meaningful with data with even intervals.
You can remove data points with the
remove() method in the series.
Removal is generally not animated, unless a data point is added in the same
change, as is caused by the shift parameter for the
add().
Updating Data Items
If you update the properties of a
DataSeriesItem object, you need
to call the
update() method for the series with the item as the
parameter. Changing data in this way causes animation
of the change.
Range Data
Range charts expect the Y values to be specified as minimum-maximum value pairs.
The
DataSeriesItem provides
setLow() and
setHigh() methods to set the minimum and maximum values of a data
point, as well as a number of constructors that accept the values.
RangeSeries series = new RangeSeries("Temperature Extremes"); // Give low-high values in constructor series.add(new DataSeriesItem(0, -51.5, 10.9)); series.add(new DataSeriesItem(1, -49.0, 11.8)); // Set low-high values with setters DataSeriesItem point = new DataSeriesItem(); point.setX(2); point.setLow(-44.3); point.setHigh(17.5); series.add(point);
The
RangeSeries offers a slightly simplified way of adding ranged
data points, as described in Range Series.
Range Series
The
RangeSeries is a helper class that extends
DataSeries to allow specifying interval data a bit easier, with a
list of minimum-maximum value ranges in the Y axis. You can use the series in
range charts, as described in
"Area and
Column Range Charts".
For the X axis, the coordinates are generated at fixed intervals starting from the value specified with the pointStart property (default is 0) at intervals specified with the pointInterval property (default is 1.0).
Setting the Data
The data in a
RangeSeries is given as an array of minimum-maximum
value pairs for the Y value axis. The pairs are also represented as arrays. You
can pass the data using the ellipsis in the constructor or using
setData():
RangeSeries series = new RangeSeries("Temperature Ranges", new Double[]{-51.5,10.9}, new Double[]{-49.0,11.8}, ... new Double[]{-47.0,10.8}); conf.addSeries(series);
Data Provider Series
DataProviderSeries is an adapter for using a
DataProvider as a
DataSeries in a chart. Using
setPointName(),
setX(), and
setY() you can define which parts of the bean in the
DataProvider are used in the chart.
Let us consider an example, where we have a
DataProvider which provides items of type
Order.
The
Order class has
getDescription(),
getUnitPrice(), and
getQuantity() to be used for the chart:
public class Order { private String description; private int quantity; private double unitPrice; public Order(String description, int quantity, double unitPrice) { this.description = description; this.quantity = quantity; this.unitPrice = unitPrice; } public String getDescription() { return description; } public int getQuantity() { return quantity; } public double getUnitPrice() { return unitPrice; } public double getTotalPrice() { return unitPrice * quantity; } }
If we have a data provider containing a list of
Order instances:
// The data List<Order> orders = new ArrayList<>(); orders.add(new Order("Domain Name", 3, 7.99)); orders.add(new Order("SSL Certificate", 1, 119.00)); orders.add(new Order("Web Hosting", 1, 19.95)); orders.add(new Order("Email Box", 20, 0.15)); orders.add(new Order("E-Commerce Setup", 1, 25.00)); orders.add(new Order("Technical Support", 1, 50.00)); DataProvider<Order, ?> dataProvider = new ListDataProvider<>(orders);
We can display the data in a
Chart as follows:
// Create a chart and use the data provider Chart chart = new Chart(ChartType.COLUMN); Configuration configuration = chart.getConfiguration(); DataProviderSeries<Order> series = new DataProviderSeries<>(dataProvider, Order::getTotalPrice); configuration.addSeries(series);
To make the chart look nicer, we can add a name for the series and show the order description when hovering points:
series.setName("Order item quantities"); series.setX(Order::getDescription);
To show the description also as x axis labels, we need to set the x axis type to category as the labels are strings:
configuration.getxAxis().setType(AxisType.CATEGORY);
The result, with some added titles, is shown in Chart Bound to a
DataProvider.
DataProvider
Drill-Down
Charts allow drilling down from a chart to a more detailed view by
clicking an item in the top-level view. To enable the feature, you need to
provide a separate data series for each of the detailed views by calling the
addItemWithDrilldown() method. When the user clicks on a
drill-down item, the current series is animated into the linked drill-down
series. A customizable back button is provided to navigate back to the main
series, as shown in Detailed series after a drill-down.
To make use of drill-down, you need to provide the top-level series and all the
series below it beforehand. The data is transferred to the client-side at the
same time and no client-server communication needs to happen for the drill-down.
The drill-down series must have an identifier, set with
setId(),
as shown below.
DataSeries series = new DataSeries(); DataSeriesItem mainItem = new DataSeriesItem("MSIE", 55.11); DataSeries drillDownSeries = new DataSeries("MSIE versions"); drillDownSeries.setId("MSIE"); drillDownSeries.add(new DataSeriesItem("MSIE 6.0", 10.85)); drillDownSeries.add(new DataSeriesItem("MSIE 7.0", 7.35)); drillDownSeries.add(new DataSeriesItem("MSIE 8.0", 33.06)); drillDownSeries.add(new DataSeriesItem("MSIE 9.0", 2.81)); series.addItemWithDrilldown(mainItem, drillDownSeries);
Turbo Mode
Turbo mode is a feature that optimizes performance of charts with a large amount of data items.
If a series in the chart contains more data items than the configured turbo threshold, then turbo mode is automatically enabled.
The default value for the turbo threshold is
1000.
Turbo mode only works with specific types of series, and other series that are not compatible will not render correctly when their number of data items exceeds the configured threshold.
The following series are not compatible with turbo mode:
DataSeries, when adding one of the following series items:
BoxPlotItem
DataSeriesItem, when setting any other property than
xand
y
DataSeriesItem3d
DataSeriesItemBullet
DataSeriesItemTimeline
DataSeriesItemXrange
FlagItem
OhlcItem, when setting any other property than
x,
high,
low,
open,
WaterFallSum
HeatSeries
NodeSeries
RangeSeries
TreeSeries
The turbo threshold, which determines when the turbo mode is activated, can be configured in a series' or the chart’s plot options:
PlotOptionsSeries options = new PlotOptionsSeries(); options.setTurboThreshold(2000); series.setPlotOptions(options);
Turbo mode can be disabled by setting the turbo threshold to
0. | https://vaadin.com/docs/latest/components/charts/data | CC-MAIN-2022-27 | refinedweb | 1,483 | 55.74 |
The Target API defines how you interact with targets in your plugin. For example, you would use the Target API to read the
sources field of a target to know which files to run on.
The Target API can also be used to add new target types—such as adding support for a new language. Additionally, the Target API can be used to extend pre-existing target types.
v1 plugin author upgrading to the Target API?
These docs are written from the perspective of writing a brand new plugin using the Target API and v2 engine, rather than the perspective of already having a v1 plugin and writing bindings for your plugin. However, these docs are still relevant.
We recommend reading the docs in this order:
- Skim this "Concepts" page. The main difference from V1 targets is that fields are the most important part of the Target API. Rather than defining your fields in the
__init__()of your target, you create a new class for each field.
- Read "Creating new fields". The majority of your bindings will be creating fields for each custom target you have.
- Read "Creating new targets". This shows how to hook up the fields you created in the previous step and how to register your target in
register.py.
While writing your binding, run
./pants help my_custom_targetto check that everything looks right.
See here for an example of writing a binding.
Please message us on Slack if you have any questions or you would like help writing bindings! We are eager to help.
Targets and Fields - the core building blocks
Definition of target
As described in Targets and BUILD files, a target is a set of metadata describing some of your code.
For example, this BUILD file defines a
python_tests target.
python_tests( sources=['app_test.py'], compatibility='==3.7.*' timeout=120, )
Definition of field
A field is a single value of metadata belonging to a target.
In the above example,
sources,
compatibility, and
timeout are all fields.
Each field has a Python class that defines its BUILD file alias, data type, and optional settings like default values. For example:
from pants.engine.target import IntField, StringField class PythonInterpreterCompatibility(StringField): alias = "compatibility" class PythonTestsTimeout(IntField): alias = "timeout" default = 60
Precise definition of target: a combination of fields
Precisely, a target is a combination of fields, along with a BUILD file alias.
These fields should make sense together. For example, it does not make sense for a
python_library target to have a
haskell_version field.
In fact, it only takes 3 lines of code to create a new target:
from pants.engine.target import Dependencies, Sources, Target, Tags class CustomTarget(Target): alias = "custom_target" core_fields = (Sources, Dependencies, Tags)
Any unrecognized fields will cause an exception when used in a BUILD file.
Fields may be reused
Because fields are stand-alone Python classes, the same field definition may be reused across multiple different target types.
For example, most target types have the
sources field.
resources( name="files_tgt", sources=["demo.txt"], ) python_library( name="python_tgt", sources=["demo.py"], )
This gives you reuse of code (DRY) and is important for your plugin to work with multiple different target types, as explained below.
A Field-Driven API
Pants plugins do not care about specific target types; they only care that the target type has the right combination of field types that the plugin needs to operate.
For example, the Python autoformatter Black does not actually care whether you have a
python_library,
python_tests, or
custom_target target; all that it cares about is that your target type has the field
PythonSources.
Targets are only used to get access to the underlying fields through the methods
.has_field() and
.get():
if target.has_field(PythonSources): print("My plugin can work on this target.") timeout_field = target.get(PythonTestsTimeout) print(timeout_field.value)
This means when creating new target types, the fields you choose for your target will determine the functionality it has.
Customizing fields through subclassing
Often, you may like how a field behaves, but want to make some tweaks. For example, you may want to give a default value to the
Sources field.
To modify a pre-existing field, simply subclass it.
from pants.engine.target import Sources class JsonSources(Sources): default = ("*.json",)
The
Target methods
.has_field() and
.get() understand this subclass relationship, as follows:
>>> json_target.has_field(JsonSources) True >>> json_target.has_field(Sources) True >>> python_target.has_field(JsonSources) False >>> python_target.has_field(Sources) True
This subclass mechanism is key to how the Target API behaves:
- You can use subclasses of fields—along with
Target.has_field()— to filter out irrelevant targets. For example, the Black autoformatter doesn't work with any plain
Sourcesfield; it needs
PythonSources. The Python test runner is even more specific: it needs
PythonTestsSources.
- You can create custom fields and custom target types that still work with pre-existing functionality. For example, you can subclass
PythonSourcesto create
DjangoSources, and the Black autoformatter will still be able to operate on your target.
Updated about a month ago | https://www.pantsbuild.org/docs/target-api-concepts | CC-MAIN-2020-50 | refinedweb | 823 | 57.67 |
Bart De Smet's on-line blog (0x2B | ~0x2B, that's the question)
Spyware protection for free. For use on Windows XP SP2 and Windows Server 2003 SP1. Requires genuine Windows.
Download it here
The picture tells it all...
Introduction
A couple of days ago I received the following mail:
Hi Bart,I’m trying to add an install functionally that will just basically copy the whole CD content to a local C drive folder. Executing the launcher.exe in that local folder works fine but if I create a desktop shortcut that targets the launcher.exe produced a file not found error for cassinilight.dll. I was wondering if you have an idea in what location was the application looking for cassinilight.dll. Any help is highly appreciated.
The short answer is really short: probing. Probing is the technique employed by the CLR's assembly loader to find a dependent assembly based on searching for it in various folders. Strongly named assemblies (those you signed using an sn.exe generated public/private key pair) are being searched for in the GAC (and paths specified in codeBase configuration elements, see further; and the "standard locations"). Weakly named assemblies are also probed, by looking in the same folder as the application and in subfolders named after the dependent assemblies themselves. However, sometimes it's not that easy and you really want to see what's going on (a common problem is an assembly being loaded from the GAC while you have recompiled it to your bin\Debug folder in Visual Studio which leads to unexpected debugging results).
In this post, I'm showing you how to make a jumpstart with Fusion, assembly probing and the "Fusion log viewer" aka fuslogvw.exe. For the record, Fusion is the codename of the assembly loader component of the CLR which (you can still see this in the SSCLI source tree under sscli20\clr\src\fusion).
A faulty application
Right, let's create a plain simple demo to illustrate the principle. It's so simple it fits in one console window using "copy con" file generation:
The code is:
foo.cs (compile using csc /t:library foo.cs)public class Foo{}
bar.cs (compile using csc /t:library /r:foo.dll bar.cs)class Bar{ public static void Main() { new Foo(); }}
Now you should have two assemblies: foo.dll and bar.exe. Run the application, it should just run (although it doesn't do anything useful, it doesn't produce any errors either).
Time has come to make the app faulty. Create a subfolder called "oops" and move the foo.dll file to it. Now bar.exe will fail:
And guess what, you shouldn't even read my blog to find out what's wrong. The runtime is so kind to tell you to enable Fusion:
To enable assembly bind failure logging, set the registry value [HKLM\Software\Microsoft\Fusion!EnableLog] (DWORD) to 1.
Setting up Fusion
Redundant info maybe, but here are two ways to enable Fusion.
For modern developers - PowerShell
PS C:\Users\Bart> new-itemproperty -path HKLM:\SOFTWARE\Microsoft\Fusion -n EnableLog -t Dword -va 1
PSPath : Microsoft.PowerShell.Core\Registry::HKEY_LOCAL_MACHINE\SOFTWARE\ Microsoft\FusionPSParentPath : Microsoft.PowerShell.Core\Registry::HKEY_LOCAL_MACHINE\SOFTWARE\ MicrosoftPSChildName : FusionPSDrive : HKLMPSProvider : Microsoft.PowerShell.Core\RegistryEnableLog : 1
For command-line freaks - reg.exe
C:\temp>reg add HKLM\Software\Microsoft\Fusion /v EnableLog /t REG_DWORD /d 1The operation completed successfully.
For UI lovers - regedit.exe
You should be able to find out yourself :-).
Analyzing the problem
Run bar.exe again after you've enabled Fusion. This time you get a realm of information:
The most interesting portion is the last part:
LOG: Attempting download of new URL: Attempting download of new URL: Attempting download of new URL: Attempting download of new URL.
These are the locations where the system attempted to find the referenced assembly "foo". The "oops" folder isn't there obviously, so the probing operation fails.
Now run fuslogvw.exe and you should see the following log information:
If you double-click on the last line, you'll see the following in a browser window:
*** Assembly Binder Log Entry (12/10/2006 @ 12:16:56) ***The operation failed.Bind result: hr = 0x80070002. The system cannot find the file specified: Attempting download of new URL: All probing URLs attempted and failed.
Since there's no application configuration file to specify probing locations, the default probing process is used, effectively looking in the same folder as the application (see Appbase) and in a subfolder with the assembly name (without the extension, i.e. Appbase\AssemblyName, in our example c:\temp\foo). All logs end up in the IE temporary files cache, but you can override this in fuslogvw (or by setting registry entries):
where "c:\temp" is set in the LogPath REG_SZ value in the Fusion registry key. The logging info will end up in a subfolder called "NativeImage":
Setting a custom probing path
You can drive the probing mechanism by specifying probing paths in a configuration file. So create the following bar.exe.config file in the bar.exe folder (c:\temp on my system):
<?xml version="1.0"?><configuration> <runtime> <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1"> <probing privatePath="oops" /> </assemblyBinding> </runtime></configuration>
Now bar.exe works fine again:
If you configure Fusion to log all binds to disk, like this:
you'll see log entries appear after re-running the bar.exe applicatin again:
This time with the following logging info:
*** Assembly Binder Log Entry (12/10/2006 @ 13:01:05) ***The operation was successful.Bind result: hr = 0x0. The operation completed successfully: Private path hint found in configuration file: oops.LOG: Using application configuration file: C:\temp\bar.exe.configLOG:: Assembly download was successful. Attempting setup of file: C:\temp\oops\foo.dllLOG: Entering run-from-source setup phase.LOG: Assembly Name is: foo, Version=0.0.0.0, Culture=neutral, PublicKeyToken=nullLOG: Binding succeeds. Returns assembly from C:\temp\oops\foo.dll.LOG: Assembly is loaded in default load context.
As you can see, the oops path is being probed and the assembly is found thanks to our "private path hint" in the configuration file.
Conclusion
Once the FileNotFoundExceptin might have been your worst nightmare (actually MissingMethodException should deserve that spot) but when things get bad, Fusion comes to the rescue. Happy probing!
Did you know the third version of the Visual Studio 2005 SDK was shipped about one month ago? I missed this one out for a while, so it's time to explore it right now. Check out Somasegar's WebLog too for information on it.
One great thing in there is the MPPG/MPLex parser/lexer twin (just like yacc/lex or bison/flex) sample that comes with the SDK. I saw this project (called GPPG) from Queensland University quite a while ago (after attending the Good For Nothing Compiler session at the PDC last year) and have been skimming over it because of my interest in compilers. The big difference with yacc/lex is the output language, which is C# instead of C.
As you might already know by now, I spend a lot of my free time on the "dark side", inside the runtime (cf. SSCLI), messing around with specs (cf. ECMA-335) and having fun with compiler technology (cf. some-personal-project-without-a-name). I'll try to blog about MPPG/MPLex experiences in the near future.
If this isn't something to spend your Sunday afternoon on ...
Bart De Smet - Ghent - 10/12/06 1:55 (deferred post)
Lately I've done some code review focusing on security and performance. One of my findings is presented in this blog post, concerning network performance when working with streams. This sample illustrates how to optimize network throughput when working with NetworkStream objects.
Basically, a NetworkStream is non-buffered by default. When dealing with small pieces of data at a time this is highly inefficient. It's much better to have some buffer that collects data for network submission. This can be accomplished using the BufferedStream class in the System.IO API.
A simple demo server: the byte sucker
For demo purposes I'm presenting you with a simple "byte sucker" server that listens on some port for an incoming TCP connection and sucks data from it till the socket is closed by the client. No multi-threading, just plain simple code:
using System;using System.IO;using System.Net;using System.Net.Sockets;class Srv{ public static void Main(string[] args) { Console.WriteLine("Byte sucker network server"); Console.WriteLine("--------------------------\n"); int port = -1; if (args.Length != 1 || !int.TryParse(args[0], out port)) { Console.WriteLine("Usage: srv.exe <port>"); return; } TcpListener srv = new TcpListener(IPAddress.Loopback, port); srv.Start(); while (true) { Console.Write("Listening... "); TcpClient clnt = srv.AcceptTcpClient(); NetworkStream ns = clnt.GetStream(); Console.WriteLine("Client connected."); Console.Write("Receiving data... "); while (ns.ReadByte() != -1) ; Console.WriteLine("Finished."); Console.WriteLine(); } }}
Optimizing a client
On to the real stuff: the client that submits data to the server. We'll assume a client that sends data on a byte-per-byte basis to the server. The straightforward way to do this is the following:
TcpClient clnt = new TcpClient("localhost", 1234);NetworkStream ns = clnt.GetStream();for (...; ...; ...) ns.WriteByte(...);
A better way to do this relies on a buffer:
TcpClient clnt = new TcpClient("localhost", 1234);NetworkStream ns = clnt.GetStream();BufferedStream bs = new BufferedStream(ns);for (...; ...; ...) bs.WriteByte(...);
The goal is to buffer data before passing it on to the underlying stream, in casu the NetworkStream.
Here's the full code:
using System;using System.Diagnostics;using System.IO;using System.Net.Sockets;class Buffer{ public static void Main(string[] args) { Console.WriteLine("Buffered network traffic demo"); Console.WriteLine("-----------------------------\n"); int port = -1; if (args.Length != 1 || !int.TryParse(args[0], out port)) { Console.WriteLine("Usage: buffer.exe <port>"); return; } Console.Write("Connecting on port {0}... ", port); Stopwatch watch = new Stopwatch(); TcpClient clnt = new TcpClient("localhost", 1234); NetworkStream ns = clnt.GetStream(); Console.WriteLine("Connected."); Console.WriteLine(); Console.Write("Sending non-buffered data... "); Random rand = new Random(); watch.Start(); for (int i = 0; i < 1000000; i++) ns.WriteByte((byte)rand.Next(255)); watch.Stop(); Console.WriteLine("Done."); Console.WriteLine("Non-buffered: {0}", watch.Elapsed); Console.WriteLine(); Console.Write("Sending buffered data... "); BufferedStream bs = new BufferedStream(ns); watch.Reset(); watch.Start(); for (int i = 0; i < 1000000; i++) bs.WriteByte((byte)rand.Next(255)); watch.Stop(); Console.WriteLine("Done."); Console.WriteLine("Buffered: {0}", watch.Elapsed); ns.Close(); clnt.Close(); }}
Test time
Compile both apps and run in two command prompts (start srv.exe first and then buffer.exe), or use "start" to launch the apps in a separate window. Needless to say the port parameter should be the same (e.g. 1234). The server output isn't of any particular interest, but the client output is:
Buffered network traffic demo-----------------------------
Connecting on port 1234... Connected.Sending non-buffered data... Done.Non-buffered: 00:00:06.1217484
Sending buffered data... Done.Buffered: 00:00:01.2173279
A factor 5 faster with buffering. Great isn't it? One little remark: you can specify an additional parameter to the BufferedStream's constructor, to indicate the buffer size in bytes. Play around with this setting to find out about the right balance for your app.
Keep it fast!
Simple question today: "How to change the password of a SQL Server account programmatically using .NET?". The answer: Microsoft.SqlServer.Management.Common.
Create a simple Console Application project and add a reference to the Microsoft.SqlServer.ConnectionInfo.dll assembly (should be listed on your machine if you've installed any flavor of SQL Server 2005).
Here's the code of our password change tool:
using System;
using System.Security;
using System.Runtime.InteropServices;
using Microsoft.SqlServer.Management.Common;
namespace SqlResetPwd
{
class Program
{
static void Main(string[] args)
{
Console.WriteLine("Reset SQL Server password");
Console.WriteLine("-------------------------\n");
Console.Write("Server name: "); string instance = Console.ReadLine();
Console.Write("User name: "); string user = Console.ReadLine();
Console.Write("Password: "); SecureString pwd = AskPassword();
Console.WriteLine(); Console.WriteLine();
Console.Write("Trying to connect... ");
ServerConnection conn = new ServerConnection(instance, user, pwd);
try
{
conn.Connect();
Console.WriteLine("Connected."); Console.WriteLine();
conn.Disconnect();
conn = new ServerConnection(instance, user, pwd);
SecureString newPwd, conPwd;
while (true)
{
Console.Write("New password: "); newPwd = AskPassword(); Console.WriteLine();
Console.Write("Confirm: "); conPwd = AskPassword(); Console.WriteLine();
if (!Match(newPwd, conPwd))
{
Console.WriteLine("The specified passwords do not match. Please try again.");
}
else
{
try
{
conn.ChangePassword(newPwd);
break;
}
catch (Exception ex)
{
Console.WriteLine("Failed to change password. " + ex.Message);
}
}
}
Console.WriteLine();
Console.WriteLine("Password changed successfully.");
}
catch (ConnectionFailureException ex)
{
Console.WriteLine((ex.InnerException != null ? ex.InnerException.Message : "Failed."));
}
finally
{
conn.SqlConnectionObject.Close();
}
}
static SecureString AskPassword()
{
SecureString pwd = new SecureString();
while (true)
{
ConsoleKeyInfo i = Console.ReadKey(true);
if (i.Key == ConsoleKey.Enter)
{
break;
}
else if (i.Key == ConsoleKey.Backspace)
{
pwd.RemoveAt(pwd.Length - 1);
Console.Write("\b \b");
}
else
{
pwd.AppendChar(i.KeyChar);
Console.Write("*");
}
}
return pwd;
}
unsafe static bool Match(SecureString s1, SecureString s2)
{
if (s1.Length != s2.Length)
return false;
IntPtr bs1 = Marshal.SecureStringToBSTR(s1);
IntPtr bs2 = Marshal.SecureStringToBSTR(s2);
char* ps1 = (char*) bs1.ToPointer();
char* ps2 = (char*) bs2.ToPointer();
try
{
for (int i = 0; i < s1.Length; i++)
if (ps1[i] != ps2[i])
return false;
return true;
}
finally
{
if (IntPtr.Zero != bs1)
Marshal.ZeroFreeBSTR(bs1);
if (IntPtr.Zero != bs2)
Marshal.ZeroFreeBSTR(bs2);
}
}
}
}
A few interesting things to mention:
For more information on SecureString, see my "Talking about System.Security.SecureString" blog post too.
Have fun!
Interesting site -. Just like other browsers support add-ons, IE7 does (older versions of the browser did too, through Windows Marketplace). I hope people will find it as hot as the ones of competing browsers. Just check it out!
Need to say more? Download here: or via the redir website. (Or you can wait for Microsoft Update to install it for you within a few weeks from now.) For information on language support, see this.
Install 7 now :-)!
Gert Drapers, development manager for DataDude and SQL guru for life, announces the availability of CTP6 on his blog. Also notice the RTM release is scheduled for Q1 2007.
For the Belgians not attending TechEd IT Forum (most developers don't I guess, but there are exceptions like me), Gunther Beersaerts will deliver an MSDN Evening Session on November 15, 2006: Introducing Visual Studio Team Edition for Database Professionals.
Time to become datadudes, don't we? | http://community.bartdesmet.net/blogs/bart/archive/2006/10.aspx?PageIndex=2 | CC-MAIN-2015-35 | refinedweb | 2,402 | 52.15 |
No, I should say forms particularly.
I have lots of things to blog about, but nothing makes me want to blog like code. Ideas are hard, code is easy. So when I saw Jacob’s writeup about dynamic Django form generation I felt a desire to respond. I didn’t see the form panel at PyCon (I intended to but I hardly saw any talks at PyCon, and yet still didn’t even see a good number of the people I wanted to see), but as the author of an ungenerator and as a general form library skeptic I have a somewhat different perspective on the topic.
The example created for the panel might display that perspective. You should go read Jacob’s description; but basically it’s a simple registration form with a dynamic set of questions to ask.
I have created a complete example, because I wanted to be sure I wasn’t skipping anything, but I’ll present a trimmed-down version.
First, the basic control logic:
from webob.dec import wsgify from webob import exc from formencode import htmlfill @wsgify def questioner(req): questions = get_questions(req) # This is provided as part of the example if req.method == 'POST': errors = validate(req, questions) if not errors: ... save response ... return exc.HTTPFound(location='/thanks') else: errors = {} ## Here's the "form generation": page = page_template.substitute( action=req.url, questions=questions) page = htmlfill.render( page, defaults=req.POST, errors=errors) return Response(page) def validate(req, questions): # All manual, but do it however you want: errors = {} form = req.POST if (form.get('password') and form['password'] != form.get('password_confirm')): errors['password_confirm'] = 'Passwords do not match' fields = questions + ['username', 'password'] for field in fields: if not form.get(field): errors[field] = 'Please enter a value' return errors
I’ve just manually handled validation here. I don’t feel like doing it with FormEncode. Manual validation isn’t that big a deal; FormEncode would just produce the same errors dictionary anyway. In this case (as in many form validation cases) you can’t do better than hand-written validation code: it’s shorter, more self-contained, and easier to tweak.
After validation the template is rendered:
page = page_template.substitute( action=req.url, questions=questions)
I’m using Tempita, but it really doesn’t matter. The template looks like this:
<form action="{{action}}" method="POST"> New Username: <input type="text" name="username"><br> Password: <input type="password" name="password"><br> Repeat Password: <input type="password" name="password_confirm"><br> {{for question in questions}} {{question}}: <input type="text" name="{{question}}"><br> {{endfor}} <input type="submit"> </form>
Note that the only “logic” here is to render the form to include fields for all the questions. Obviously this produces an ugly form, but it’s very obvious how you make this form pretty, and how to tweak it in any way you might want. Also if you have deeper dynamicism (e.g., get_questions start returning the type of response required, or weird validation, or whatever) it’s very obvious where that change would go: display logic goes in the form, validation logic goes in that validate function.
This just gives you the raw form. You wouldn’t need a template at all if it wasn’t for the dynamicism. Everything else is added when the form is “filled”:
page = htmlfill.render( page, defaults=req.POST, errors=errors)
How exactly you want to calculate defaults is up to the application; you might want query string variables to be able to pre-fill the form (use req.params), you might want the form bare to start (like here with req.POST), you can easily implement wizards by stuffing req.POST into the session to repeat a form, you might read the defaults out of a user object to make this an edit form. And errors are just handled automatically, inserted into the HTML with appropriate CSS classes.
A great aspect of this pattern if you use it (I’m not even sure it deserves the moniker library): when HTML 5 Forms finally come around and we can all stop doing this stupid server-side overthought nonsense, you won’t have overthought your forms. Your mind will be free and ready to accept that the world has actually become simpler, not more complicated, and that there is knowledge worth forgetting (forms are so freakin’ stupid!) If at all possible, dodging complexity is far better than cleverly responding to complexity. | http://www.ianbicking.org/blog/2010/03/throw-out-your-frameworks-forms-included.html | CC-MAIN-2019-43 | refinedweb | 739 | 54.73 |
> blob-2.0.5-pre2.rar > README
(emacs users can use -*- outline -*- mode for this file) $Id: README,v 1.7 2002/01/07 20:16:49 erikm Exp $ * Info ====== ** What is blob? ---------------- Blob is the Boot Loader OBject, the boot loader for the LART. Blob is able to boot a Linux kernel stored in flash or RAM and provide that kernel with a ramdisk (again from flash or RAM). Blob is copyrighted by Jan-Derk Bakker and Erik Mouw. Blob is released with a slightly modified GNU GPL license: we don't consider the operating systems that blob boots as a derived work. Later on, several other people also contributed to blob. Blob started its life as a boot loader for the LART, but nowadays it has been ported to the Intel Assabet SA-1110 evaluation platform, the Intel Brutus SA-1100 evaluation platform, the PLEB board, the Nesa board, the TuxScreen (aka Shannon), and to the CreditLART board. ** Where is the latest blob source available? --------------------------------------------- The latest and greatest blob source is available from SourceForge, see . The latest source is available from anonymous CVS. First log in to the CVS server: cvs -d:pserver:anonymous@cvs.blob.sourceforge.net:/cvsroot/blob login There is no password, so just press enter. Now check out the blob source: cvs -z3 -d:pserver:anonymous@cvs.blob.sourceforge.net:/cvsroot/blob co blob If you're using the blob CVS source, it's a good idea to subscribe to the blob-cvs-commit mailing list so you know about blob patches. See . The general blob discussion is done on the LART mailing list, see for more information. There is also a blob IRC channel: log on to irc.openprojects.net and join #blob. Note that this is a strictly on-topic development IRC channel, not a general blob help channel. Blob even has a home page: . ** So what is LART? ------------------- LART is the Linux Advanced Radio Terminal, a small low power computing element used in the MMC project and the Ubiquitous Communications programme (see,, and ). LART features: - 10x7.5 cm (that's 4x3 inch in Stonehenge Units) - 220 MHz Digital StrongARM SA-1100 CPU - 4 Mbyte flash memory - 32 Mbyte DRAM memory - Low power: peak power consumption is 1W * Building blob =============== ** Prerequisites ---------------- - A native ARM/StrongARM Linux system with gcc 2.95.2, and binutils 2.9.5.0.22 or better - Or any UNIX system with cross compiling binutils 2.9.5.0.22 (or better) and gcc 2.95.2 installed - GNU make (although some vendor supplied make utilities will do) - GNU autoconf and automake (if you build blob from CVS) - tools are that good that we don't think that a sun-sparc-solaris to arm-linux cross compiler will fail. ** Generating configure and Makefiles ------------------------------------- This step is only necessary if you build blob from CVS. - Run "tools/rebuild-gcc" ttwwiiccee ** Configuring and compiling the package ---------------------------------------- With a cross compiler tool chain (using tcsh as shell): - setenv CC /path/to/cross/gcc - setenv OBJCOPY /path/to/cross/objcopy - Run "./configure --with-linux-prefix=/path/to/armlinux/source \ --with-board=boardname arm-unknown-linux-gnu" There are currently a couple of valid board names, choose from: assabet, brutus, creditlart, lart, nesa, pleb, or shannon. If the board name is ommited, lart will be chosen as a default. If you want to do some serious hacking on blob, consider using the "- "setenv FOO bar". With a native ARM Linux tool chain: - Run "./configure --with-board=boardname" - Run "make" The binary image is in src/blob; src/blob-start-elf32 and src/blob-rest-elf32 are the two parts of the images with complete ELF headers. To disassemble "blob-start-elf32", use: arm-linux-objdump -D -S blob-start-elf32 To see the actual hex codes of blob, use: od -t x4 blob ** Installing ------------- *** LART -------- The current wisdom to install blob on a LART is: - Connect the JTAG dongle to the LART - Connect the other end of the JTAG dongle to the parallel port of your PC - Power up the LART - Use the jflash utility (available from the LART web site) to write blob (you usually need to be root for this): jflash blob The JTAG flash burn code however is now worked out as a set of Linux executables provided by the JTAG flash project located at the LART page as well as JTAG executables ported to support the TuxScreen screen phone. The LART project initially used the following wisdom to install blob: Required hardware & software: - The LART itself with 4 Mbyte flash memory - An external 128 kbyte flash board - A PCI 7200 (???) digital I/O card with a Linux driver - A flash burn program for this I/O card The external flash board is connected to the PCI 7200 card and blob is written into the flash memory using the flash burn program. The external flash board is connected to the LART low speed interface. The external flash chip is mapped at address 0x00000000, and the internal flash is re-mapped at 0x08000000. As soon as the LART boots, the external flash is copied to the first 128 kbyte of the internal flash. The next time the LART is started without external flash board, it starts from its internal flash which now contains the just downloaded blob. Why this strange way to download blob? We first tried to use the SA-1100 JTAG interface to program the flash directly, but soon found out that it would take weeks to write a decent JTAG tool because JTAG is a real brain-damaged protocol (it was designed by a committee, need we say more?). To meet a deadline, we decided to make a special board with 128 kbyte external flash memory (and an LCD interface). *** Assabet ----------- (From Justin Seger:) The best way is to use the JTAG cable: - Connect the JTAG cable from the Assabet to your hosts parallel port - Power up the Assabet - Use the jflash utility to write blob: jflash-linux blob - Power cycle the Assabet; you should see the the bootloader starting up with the output on the serial port. *** SHANNON (TuxScreen web phone) ----------- The Shannon comes with Inferno () installed on it. You can DL and install a hosted version of Inferno and then use the sboot remote interface to install blob for the first time. Afterwards blob can reinstall itself. Alternately, you can use JTAG hardware to do the install if you have this equipment. ** Making a distribution ------------------------ This is only needed when you want to make a tar file from the current blob sources. - First configure the package - Run "make dist" * Using blob ============ ** Booting ---------- First connect a terminal (or a terminal emulator like miniterm or Seyon) to the serial port. Use the following settings for your terminal: 9600 baud, 8 data bits, no parity, 1 stop bit, no start bits (9600 8N1, a pretty standard setting for Unix systems). If possible, use VT100 terminal emulation. Switch on the power to the SA-11x0 board. The board should respond with: Consider yourself LARTed! blob version 2.0.3 Copyright (C) 1999 2000 2001 Jan-Derk Bakker and Erik Mouw Copyright (C) 2000 Johan Pouwelse blob comes with ABSOLUTELY NO WARRANTY; read the GNU GPL for details. This is free software, and you are welcome to redistribute it under certain conditions; read the GNU GPL for details. Memory Map: 0x08000000 @ 0xC0000000 (8MB) 0x08000000 @ 0xC1000000 (8MB) 0x08000000 @ 0xC8000000 (8MB) 0x08000000 @ 0xC9000000 (8MB) Loading blob from flash . done Loading kernel from flash ....... done Loading ramdisk from flash ............... done Autoboot in progress, press any key to stop ... If you don't press a key within 10 seconds, blob will automatically start the Linux kernel: Starting kernel ... Uncompressing Linux...done. Now booting the kernel ... However, if you press the
key, you will get the blob prompt: Autoboot aborted Type "help" to get a list of commands blob> ** Commands ----------- Blob knows several commands, typing "help" (without the ") will show you which: Help for blob 2.0.3, the LART bootloader The following commands are supported: * boot [kernel options] Boot Linux with optional kernel options * clock PPCR MDCNFG MDCAS0 MDCAS1 MDCAS2 Set the SA1100 core clock and DRAM timings (WARNING: dangerous command!) * download {blob|kernel|ramdisk} Download blob/kernel/ramdisk image to RAM * flash {kernel|ramdisk} Copy blob/kernel/ramdisk from RAM to flash * help Get this help * reblob Restart blob from RAM * reboot Reboot system * reload {blobkernel|ramdisk} Reload blob/kernel/ramdisk from flash to RAM * reset Reset terminal * speed Set download speed * status Display current status *** "boot" ---------- Boot the Linux kernel. You can supply extra parameters to the Linux kernel; if you don't, the kernel will use it's default command line. Blob will respond with: blob> boot Starting kernel ... Uncompressing Linux...done. Now booting the kernel ... *** "clock" ----------- This an experimental command to set the SA1100 core clock and DRAM timings. We've used it to test clock scaling. This command writes the exact values supplied on the command line to the PPCR, MDCNFG, MDCAS0, MDCAS1, and MDCAS2 registers, but it doesn't check the validity of the values. Example (that will crash your system for sure): blob> clock 0x11111111 0x22222222 0x33333333 0x44444444 0x55555555 WARNING: This command is DANGEROUS and HIGHLY EXPERIMENTAL. Don't use it unless you have a VERY thorough understanding on the inner workings of the SA1100 CPU! It works for us, YMMV. If it breaks your CPU, don't say that we didn't warn you! *** "download" -------------- Download a uuencoded blob, kernel, or ramdisk to RAM. This command needs an extra parameter: "blob", "kernel", or "ramdisk". Blob will respond with: blob> download kernel Switching to 115200 baud You have 60 seconds to switch your terminal emulator to the same speed and start downloading. After that blob will switch back to 9600 baud. Switch your terminal emulator to the indicated speed and start downloading the kernel or ramdisk. With minicom, you can use the ASCII download method, or use another shell to download the file: uuencode zImage zImage > /dev/ttyS1 Of course, use the correct serial port. If the download is successful, blob will respond with: (Please switch your terminal emulator back to 9600 baud) Received 65536 (0x00010000) bytes. If an error occurs during downloading, blob will respond with: (Please switch your terminal emulator back to 9600 baud) *** Uudecode receive failed A failed download session can have several reasons: the file is too big, the download speed too high (see the "speed" command), or the uuencoded file to be downloaded is corrupt. Correct the error and retry. A downloaded kernel or ramdisk can be written to flash with the "flash" command, or it can directly be used to boot with the "boot" command. *** "flash" ----------- Write blob, kernel, or ramdisk from RAM to flash memory. This command needs an extra parameter: "blob", "kernel" or "ramdisk". Blob will respond with: blob> flash kernel Saving kernel to flash ..... .... done This won't work on all architectures, check the RELEASE-NOTES. *** "reblob" ------------ Restart blob from RAM. This is mainly useful if you are working on blob itself because it allows you to download blob and immediately start it without having to burn it to flash first. *** "reboot" ------------ This command simply reboots the system. *** "reload" ------------ Reload blob, kernel, or ramdisk from flash memory to RAM. This command needs an extra parameter: "blob", "kernel", or "ramdisk". Blob will respond with: blob> reload kernel Loading kernel from flash ....... done The "reload command" will overwrite a just downloaded a kernel or ramdisk. *** "reset" ----------- Reset the terminal. This command will write the VT100 reset sequence (Esc-c) to the terminal. Useful if you forgot to switch your terminal emulator back to 9600 baud after downloading a kernel or ramdisk. *** "speed" ----------- Set the download speed. This command needs a download speed value as an extra parameter. Valid values are: 1200, 9600, 19200, 38400, 57600, and 115200. Blob will respond with: blob> speed 19200 Download speed set to 19200 baud *** "status" ------------ Show the current status. Blob will respond with: blob> status Bootloader : blob Version : 2.0.3 Running from : internal flash Blocksize : 0x00800000 Download speed: 115200 baud Blob : from flash Kernel : downloaded, 424333 bytes Ramdisk : from flash Depending on what commands you used before, these values may or may not be different. * Porting blob ============== Porting blob to a new SA11x0 platform is quite easy and consist of four steps: 1. Define the features of the architecture 2. Write some architecture specific code 3. Test the new architecture 4. Submit the patch The next couple of paragraphs describe the process of porting blob to the "foobar" platform. ** Define the architecture in configure.in ------------------------------------------ First you need two know a couple of things: the name of the board, what kind of CPU the board uses (SA1100 or SA1110), whether it has LCD support, and the name of the platform obj and flash obj. Let's assume the foobar platform has an SA1100 CPU, no use LCD, and platform obj and flahs obj are foobar.o. The correct lines for configure.in will be: foobar) board_name="Foobar Board" AC_DEFINE(FOOBAR) BLOB_PLATFORM_OBJ="foobar.o" BLOB_FLASH_OBJS="foobar.o" use_cpu="sa1100" use_lcd="no" ;; Put this just after the CreditLART definition. ** Define the architecture in acconfig.h ---------------------------------------- Because configure.in was instructed to define FOOBAR for the foobar platform, we have to define the symbol in acconfig.h as well. Add the following two lines to acconfig.h, just after the CLART define: /* Define for foobar boards */ #undef FOOBAR ** Update the build system -------------------------- Run the following commands to update the configure script, include/config.h.in, and the Makefile.in files: tools/rebuild-gcc tools/rebuild-gcc (yes, twice) ** Configure blob ----------------- Configure blob for the new foobar architecture: setenv CC arm-linux-gcc ./configure --with-linux-prefix=/path/to/armlinux/source \ --with-board=foobar --enable-maintainer-mode \ --enable-blob-debug arm-unknown-linux-gnu We're using maintainer-mode and debug information to help the port. See the section about "Configuring and compiling the package" for general information. ** Select correct clock speed ----------------------------- Open src/blob/start.S in an editor, and add a line to select the correct clock speed (just before the SHANNON definition): #if defined FOOBAR cpuspeed: .long 0x09 /* 190 MHz */ ** Edit memory settings ----------------------- Edit src/blob/memsetup-sa1100.S or src/blob/memsetup-sa1110.S, and add the correct memory setting for the foobar architecture. Add these (example) settings right before the PLEB definitions: #if defined FOOBAR mdcas0: .long 0x1c71c01f mdcas1: .long 0xff1c71c7 mdcas2: .long 0xffffffff mdcnfg: .long 0x0334b21f #endif Note that the SA1110 memory settings are not as modular as the SA1100 settings, so you'll have to use your imagination over there to get proper memory settings. Right now, the basic blob functionality is ported to your board and you should be able to compile blob by running "make". ** Edit LED defines ------------------- If your board has a LED on a GPIO pin, edit include/blob/led.h in an editor to switch it on early in the boot stage. Let's assume the foobar board has the LED on GPIO pin 1, so add the following lines just before the PLEB definition: #elif defined FOOBAR # define LED_GPIO 0x00000002 /* GPIO 1 */ ** Compile blob --------------- Now compile blob by running: make If everything went right, you have a new blob binary in src/blob. ** Test blob ------------ You are now ready to flash blob to your board and test it. If something goes wrong in the early boot process, blob will flash the LED (that's why you should always have a LED on your board), or not work at all. As soon as you get character on the serial port the most difficult part is done and you should be ready to port arm-linux to your board. ** Submit the patch ------------------- First run "make distclean" in your blob tree so you'll get a clean source tree. Now rename your current tree and untar the original blob source (assuming that you're hacking on blob-2.0.3): cd .. mv blob-2.0.3 blob-2.0.3-foobar gzip -dc blob-2.0.3.tar.gz | tar xvf - Diff the two trees and create a patch file: diff -urN blob-2.0.3 blob-2.0.3-foobar > foobar.diff Now send the patch to me (erikm@users.sourceforge.net) and be sure to CC a couple of other blob developers (for the current list of blob developers, see ). The best way to send the patch is to attach it as plain text to your message because in that way email clients have less chance to corrupt the patch. | http://read.pudn.com/downloads101/sourcecode/unix_linux/415580/blob-2.0.5-pre2/README__.htm | crawl-002 | refinedweb | 2,765 | 63.09 |
This project aims to provide easy access to CD-ROM properties and functions. This is a class library for .NET and is written in C# language. It has only one DLL and you can access the CD-ROM with only one line code.
This is very easy. Go to your destination project and add DLL as a reference, in this way -> project menu//add reference //Browse Tab// and locate DLL, then import it:
using CdRom;
Then, with this code declare a new class:
CDRom cd_rom = new CDRom();
For catching the name of CD-ROMs and printing in Listbox:
Listbox
foreach(string temp in cd_rom.CDRomName)
listBox1.Items.Add(temp);
Name of all drives:
foreach (string temp in cd_rom.DriveName)
listBox1.Items.Add(temp);
A bool that indicates whether CD is inserted or not:
cd_rom.CdinCDRom;
If CD inserted == true, then name(Label) of CD is:
inserted == true
name(Label)
cd_rom.CdName;
And finally, string indicates the drive of CD-ROM:
string
cd_rom.CdDriveName;
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL) | http://www.codeproject.com/Articles/34604/CD-Rom-DLL-for-easy-access-to-CD-Rom | crawl-003 | refinedweb | 182 | 64.91 |
log1p, log1pf, log1pl − logarithm of 1 plus argument
#include <math.h>
double
log1p(double x);
float log1pf(float x);
long double log1pl(long double x);
Link with −lm.
log (1 + x)
It is computed in a way that is accurate even if the value of x is near zero.
On success, these functions return the natural logarithm of (1 + x).
If x is a NaN, a NaN is returned.
If x is positive infinity, positive infinity is returned.
If x is.
exp(3), expm1(3), log(3)
This page is part of release 3.53 of the Linux man-pages project. A description of the project, and information about reporting bugs, can be found at−pages/. | http://man.linuxtool.net/centos7/u3/man/3_log1p.html | CC-MAIN-2019-30 | refinedweb | 116 | 69.18 |
On Tue, 15 Mar 2005, Nicola Whitehead wrote: (snip) > term :: Parser Int > term = do f <- factor > do symbol "*" > e <- expr > return (f * t) > +++ return f (snip) > symbol and natural are defined elsewhere and work fine, but when I compile it I get the error > > ERROR "C:/HUGS/Calculator.hs":66 - Undefined variable "t" > > I suspect I'm missing something obvious, but for the life of me I can't see it. Any suggestions? (snip) You are missing something obvious. (-: "t" appears indeed to be undefined in "term". Did you mean "return (f * e)"? Variables (although why they're called that in Haskell I'm not sure) defined with <- in "do" are only in scope later in that "do", not anywhere else. Mark -- Haskell vacancies in Columbus, Ohio, USA: see | http://www.haskell.org/pipermail/haskell-cafe/2005-March/009370.html | CC-MAIN-2014-42 | refinedweb | 129 | 70.84 |
Paging Alex Tereshenkov! I'm trying to run your script located here:
Get data sources that are used by ArcGIS Server map services with Python | Tereshenkov's Blog
but I'm getting an error when I get to here:
def get_connection(data):
'''return database connection string for the service'''
return data['SVCManifest']['Databases']['SVCDatabase']['OnPremiseConnectionString']
Error:
TypeError: list indices must be integers, not str
Any ideas? This looks very useful!
Does the service have multiple layers using different connections? I'm not sure of the format of the JSON that's returned, but there's a list somewhere in there that you'll need to loop through. I would print the data variable and see what it stores. | https://community.esri.com/thread/186823-get-data-sources-that-are-used-by-arcgis-server-map-services-with-python-alex-tereshenkov | CC-MAIN-2018-43 | refinedweb | 118 | 61.97 |
I have been searching for a while now on how to import data in a SharePoint list into Excel 2007 that was similar to XML data import in Excel 2003. I can bring in the data by Exporting to Spreadsheet from SharePoint and writing the VBA to connect to
the list but I am finding a major problem that I know there must be a solution for.
Export to spreadsheet from SharePoint from a public view works like a charm, however I don't want all my users to have views that display all the columns however certain fields are used for Excel reporting. If I try and export to spreadsheet from a
personal view with the columns I need, use the same VBA code then I can bring in the data no problem (which would make sense) but other users cannot bring in the data (which also makes sense).
Anyone know what the work around for this is? I can't imagine Microsoft would kill the same function 2003 had with XML --> Data which allowed for any columns to be brought it.
Keep in mind I am not concerned with two-way synchronization for this. The article I used to first bring in the data was:.
Any help is greatly appreciated.
HI,
Instead of making this complex I would suggest you to create a public view with fields you want for your public users.
I hope this will help you out.
Thanks, Rahul Rashu
Hi Rahul,
Thank you for the prompt response. My intent is not to have a public view for a few reasons:
1. There could be as many as 70 columns that need to be imported into Excel and would prefer not to have a public view that is unnecessary (nobody is going to want to scroll horizontally that much).
2. There are some columns that I would not want visible to all public users.
Based on your response, is it safe to safe that Microsoft 2007+ can only
import SharePoint data based on a view from a list and not specified columns from a list like in 2003?
I'm guessing this is why Excel Services was created. | https://social.technet.microsoft.com/Forums/office/en-US/f8c01e86-82e4-4e2e-9329-1df385b42af4/importing-sharepoint-data-into-excel-2007?forum=sharepointgenerallegacy | CC-MAIN-2020-34 | refinedweb | 363 | 75.44 |
X509_new.3ossl - Man Page
X509 certificate ASN1 allocation functions
Synopsis
#include <openssl/x509.h> X509 *X509_new(void); X509 *X509_new_ex(OSSL_LIB_CTX *libctx, const char *propq);_ex() allocates and initializes a X509 structure with a library context of libctx, property query of propq and a reference count of 1. Many X509 functions such as X509_check_purpose(), and X509_verify() use this library context to select which providers supply the fetched algorithms (SHA1 is used internally). This created X509 object can then be used when loading binary data using d2i_X509().
X509_new() is similar to X509_new_ex() but sets the library context and property query to NULL. This results in the default (NULL) library context being used for any X509 operations requiring algorithm fetches., or an empty stack if a is NULL.)
History
The function X509_new_ex() was added in OpenSSL 3), migration_guide.7ossl_sign.3ossl(3), X509V3_get_d2i.3ossl(3).
The man pages X509_chain_up_ref.3ossl(3), X509_free.3ossl(3), X509_new_ex.3ossl(3) and X509_up_ref.3ossl(3) are aliases of X509_new.3ossl(3). | https://www.mankier.com/3/X509_new.3ossl | CC-MAIN-2022-21 | refinedweb | 163 | 58.38 |
Because you have to pass filename as option, and not data on stdin.
Use xargs for that:
locate file.ext | xargs xdg-open
or just subshell:
xdg-open "$( locate file.ext )"
Use this instead of wordwrap:
echo nl2br($output);
transforms line ending characters (
) to <br />s
or combine:
echo wordwrap(nl2br($output), 180, "<br />
");
or use <pre> for preformatted code:
echo "<pre>" . wordwrap($output, 180) . "</pre>";
Use :susp or <C-z>. fg in shell to restore vim.
Normally scrollback is available through <C-PageUp> though thus
avoiding one line scroll at all costs is not necessary.
>>> import pymongo
>>> c = pymongo.MongoClient()
>>> c['admin'].command('serverStatus',
workingSet=True)['workingSet']
{u'note': u'thisIsAnEstimate', u'computationTimeMicros': 4555,
u'pagesInMemory': 7, u'overSeconds': 388}
You need to create a window if you want to display a label. Basically
something like this (not tested):
QMainWindow* win = new QMainWindow();
QLabel *label = new QLabel(win, "Hello World!!!");
label->show();
win->show();
One way is to create a shell script containing the commands you want and
then run the shell script.
Since the Android root filesystem is not writeable at run time (usually,
unless you have rooted your device and remount it), you can copy the file
to the removable (or emulated) storage, for example /sdcard.
Then run the script using the command adb shell sh
/sdcard/your-script-name. Because each script runs in its own subshell,
both of your commands will be executed in the same shell on the device (you
can confirm it with ps).
Try replacing
addstr("Hello world");
with
printw("Hello World !!!");
see
fpings=$( {fping -c 1 -t 1 $ips | sort; } 2>&1 )
should work the {} capture everything and then it redirects both streams
(out and err) to just out and saves in the variable
Use the subprocess module:
subprocess.check_output returns the output of a command as a string.
>>> import subprocess
>>> print subprocess.check_output.__doc__
Run command with arguments and return its output as a byte string.
If the exit code was non-zero it raises a CalledProcessError. The
CalledProcessError object will have the return code in the returncode
attribute and output in the output attribute.
The approach should be
Write onchange event (onchange of textbox content call the process()
function) for the text box.
or
To add a button to triggering the function. => user enters the dish and
clicks on button.
Approach one (Using jQuery)
<!DOCTYPE>
<html>
<head>
<script type="text/javascript"
src="jquery-1.9.1.min.js"></script>
<!--<script type="text/javascript"
src="foodstore.js"></script>-->
<script>
$(document).on("keyup","#inputuser", function(){
var dish = $(this).val();
$.ajax({
type: "GET",
url: 'foodstore.php',
data : {food:dish},
success: function(data){alert(data);
$('#usererror').html(data);
}
});
This is the correct form:
(flycheck-declare-checker unity-csharp-flychecker
"given a c-sharp file, looks for the unity file and then tries to build
it using mdtool."
:command '("mdtool" "build"
(eval (process buffer-file-name)))
...)
Thanks for Bruce Conner for getting me to remove the quote before (process
...). That gave me a new error:
Error: (void-variable source-original)
So I dug into the source and saw there is no symbol substitution before
evaluation. I assumed because we were given symbols that just using
buffer-file-name wouldn't work, but I tried it and it does. I don't know if
there are ramifications for that approach down the road.
You could try using evalc to capture the output to a variable. This way it
is not displayed in the command window.
for example
sim('model')
produces output, whereas:
myCommandWindowOutput = evalc('sim(''model'')');
doesn't.
In fact, you don't even need to assign the output, you could just write:
evalc('sim(''model'')');
$('#content div') hides all div inside #content, even the indirect children
of #content element, when you display it back, you are displaying only the
direct child div.
When you hide the elements, you need to hide only the direct divs
Try
$('.mainmenuitem').click(function(){
$('#content > div').hide();
$('#' + this.id + '-content').show();
});.
What about using awk?
xm list | awk '/^test2/ {print $2}'
I added ^ in /^test/ so that it checks this text in the beginning of the
line. Also awk '$1=="test2" {print $2}' would make it.
Test
$ cat a
Name ID Mem VCPUs State
Time(s)
Domain-0 0 505 4 r-----
11967.2
test1 28 1024 1 -b----
137.9
test2 33 1024 1 -b----
3.2
$ awk '/^test2/ {print $2}' a
33
Have a look at the InfoMessage event. It may send what you need, I don't
have DB2 so that I can test.
As an aside, there are other free SQL tools you can find online since you
don't have SQL Manager any more.
EDIT: I misunderstood. I thought you wanted output like what you get from
PRINT statements and some system commands. DarkFalcon's answer is
correct... you can do a simple for loop and output reader[i]. You may want
to look at the column's data type to assist in formatting the output but
you can easily iterate through it all.
Will this work for you? It prints the files in the order you specified,
but it won't print them in color. In order to do that, you'd need to strip
the ANSI codes from the names before pattern-matching them. As it is, it
will handle filenames with embedded spaces, but not horribly pathological
names, like those with embedded newlines or control characters.
I think the awk script is fairly self-explanatory, but let me know if you'd
like clarification. The BEGIN line is processed before the ls output
starts, and the END line is processed after all the output is consumed.
The other lines start with an optional condition, followed by a sequence of
commands enclosed in curly brackets. The commands are executed on (only)
those lines that match the condition.
ls -ahlF --color=none | awk '
If you mean how to get output from the modem, use read, readline or
readlines methods of ser. See tutorial.
try this:
df -k | tr -s " " | sed 's/ /, /g' | sed '1 s/, / /g'
and see this
Swap space used is determined by your 'swappiness' system value.To find
your current setting, try:
cat /proc/sys/vm/swappiness
The value can range from 0-100, with 100 being agressive swapping, and 0
meaning swap is only used when your RAM is at capacity.To adjust the value
temporarily, try:
echo $YOURVALUE > /proc/sys/vm/swappiness
and to adjust it permanently, add a sysctl option
echo $YOURVALUE >> /etc/sysctl.conf; sysctl -p
Buffers and cache are for commonly opened and executed commands. Don't
worry about them as space being "used", the kernel will automatically free
up that space if the RAM is needed. You can force clear the cache with the
following command (though it's really not needed):
sync; echo 3 > /proc/sys/vm/drop_caches
You can try this:
echo 'mypassword' | openssl enc -d -aes256 -in somefile | less
But this doesn't look secure.
I have not tried running openssl this way, but in case it is too verbose
and if previous code is not going to work then you can always try using
expect. Here's an example:
expect -c '
spawn yourscript
expect "Please enter your password:"
send "$PASSWORD"
'
(time ls -l) 2>&1 > /dev/null |grep real
This redirects stderr (which is where time sends its output) to the same
stream as stdout, then redirects stdout to dev/null so the output of ls is
not captured, then pipes what is now the output of time into the stdin of
grep.
You are getting the exit status. According to os.system() docs:
On Unix, the return value is the exit status of the process encoded in
the format specified for wait().
...
The subprocess module provides more powerful facilities for spawning
new processes and retrieving their results; using that module is
preferable to using this function.
So, if you want to retrieve the output of the command, use subprocess
module. There are tons of examples here on SO, e.g.:
Assign output of os.system to a variable and prevent it from being
displayed on the screen
Equivalent of Backticks in Python
You can't do it the way you're going about it. Connection to host.com
closed. is output by the command you called, and not returned via STDOUT,
which you could capture with backticks.
The problem is the use of backticks. They don't capture STDERR, which is
most likely what ssh is using when it outputs its status.
The fix is to use Open3's methods, such as capture3 which will grab the
STDOUT and STDERR streams returned by the called program, and let you
output them programmatically, allowing you to align them:
stdout_str, stderr_str, status = Open3.capture3([env,] cmd... [, opts])
You should also look at Ruby's printf, sprintf or String's % method, which
calls sprintf. Using % you can easily format your strings to align:
format = '%7s %s'
puts format % ["------>", "Executing:"]
p
I don't mean to step on @RudyVisser's answers in comments, but here's
another solution:
$syntax = 'innobackupex --user="'.$mysql_user.'"
--password="'.$mysql_pass.'"
--databases="'.$mysql_db.'" --stream=tar ./ | gzip -c -1
> /var/bak/2013-08-09-1431_mysql.tar.gz ; echo $?')
$exit_status = shell_exec($syntax);
The echo inside the command should report the exit status of innobackupex,
which is 0 if the backup was successful, and non-zero if there was an
error.
PS: Percona XtraBackup also has a --compress option that uses the qpress
algorithm, known to be very fast. I mention this because I notice you're
using gzip -1 presumably for better performance.
Ok, I was confused in my other answer. In any case, the philosophy in this
answer is the same. You can use directly the popen function.
Then you have something like this:
int numOfCPU;
FILE *fp = popen("grep -c ^processor /proc/cpuinfo", "r");
fscanf(fp, "%d", &numOfCPU);
pclose(fp);
I hope it will be useful.
You could use blocks in shells to insert another command and use it to
insert lines before or after the output of the other command e.g. echo
before grep:
ps | { echo "header"; grep "something"; }
To make it easier for you in a script you could use this form:
ps | {
echo "header"
grep "something"
# possibly other echos here.
}
In awk you could use BEGIN:
ps | awk 'BEGIN { print "header"; } /something/;'
And/or END to add tailing lines:
ps | awk 'BEGIN { print "header"; } /something/; END { print "------"; }'
Of course if you have more than two commands you could just use the form on
the last
command | command | { echo "header"; grep "something"; }
Or
command | command | awk 'BEGIN { print "header"; } /something/;'
Have you considered using job dependencies and post-process the logfiles?
1) Run each "child" job (removing the "-Is") and output the IO to separate
output file. Each job should be submitted with a jobname (see -J). the
jobname could form an array.
2) Your final job would be dependent on the children finishing (see -w).
Besides running concurrent across the cluster, another advantage of this
approach is that your overall process is not susceptible to IO issues.
A good but "heavyweight" solution is to use Twisted - see the bottom.
If you're willing to live with only stdout something along those lines
should work:
import subprocess
import sys
popenobj = subprocess.Popen(["ls", "-Rl"], stdout=subprocess.PIPE)
while not popenobj.poll():
stdoutdata = popenobj.stdout.readline()
if stdoutdata:
sys.stdout.write(stdoutdata)
else:
break
print "Return code", popenobj.returncode
(If you use read() it tries to read the entire "file" which isn't useful,
what we really could use here is something that reads all the data that's
in the pipe right now)
One might also try to approach this with threading, e.g.:
import subprocess
import sys
import threading
popenobj = subprocess.Popen("ls", stdout=subprocess.PIPE, shell=True)
def stdoutpr).
The file in question did not exist in revision 1 (it was probably added at
revision 2).
You do not have to worry about the exact meaning of the @@ strings, it just
helps svn to locate the changes. In fact, it denotes the position (line
number and number of following lines) in the file where the changes
happened.
pairs = %x{rpm -qi ruby}
.split(/(?<!:)s{2,}(?![s:])|#$//)
.map{|line| line.split(/s*:s+/, 2)}
width = pairs.map{|pair| pair.first.length}.max
pairs.each{|k, v| puts "#{k.ljust(width)}: #{v}"}
It might be printing that to stderr. Try redirecting that one to PIPE as
well and read data from there. You can also append 2>&1 to the end
of the command to get stderr redirected to stdout by the shell. You might
have to add shell=True for that.
In case some output is going to standard error:
OUT=$(git status > /dev/null 2>&1; echo $?)
Of course, this does leave open the question: what is it you want to
capture in OUT?
[EDIT]
The above will put the return code of git into $OUT.
You need to run a sed command to replace '/n' with '' in the email conteent
that is being sent.
Emails are rendered as HTML text, hence the /n is ommited.
makes the html show it on next line.
You can redirect stdout to /dev/null.
yum install nano > /dev/null
Or you can redirect both stdout and stderr,
yum install nano &> /dev/null.
But if the program has a quiet option, that's even better.
Navigate to the folder containing the executable that you have built in
Visual
Studio, open a console in that folder and at the console prompt enter e.g.
>my_prog > my_output.txt
Hmm..
First point - why do you need alphabetical order if your intention is to do
some variety of "random" selection?
Next - why are you setting _tmp - you aren't using it.
Next - the brackets in your IF "%CD%"... are redundant - the quotes are
there to tell batch that the string may contain separators
Next - the /on switch tells DIR to output the SELECTED directory in
alphabetical order.
Next - _t0 appears to be intended to select 'lop 0' or 'lop 1' characters,
but a DIR /s /b output will be X:dir...filename - doesn't sem particularly
sensible...
Next - you seem to have omitted the /i switch from your FIND commands to
make the find case-insensitive.
You've not shown your directory structure, so "season order" is nebulous.
Presumably you've got ..shownameseasonepisode. Consider what hap | http://www.w3hello.com/questions/workingSet-command-doesn-39-t-show-output | CC-MAIN-2018-17 | refinedweb | 2,403 | 64.61 |
Due.
This section consists of five rules.
Rule T.81 is too loosely related to templates and hierarchies, and rule T.82 is empty; therefore, my post boils down to the three remaining rules.
I will write about the rules T.80 and T.84 together because T.84 continues the story of T as an example a naively templatized class hierarchy from the guidelines:
template<typename T>
struct Container { // an interface
virtual T* get(int i);
virtual T* first();
virtual T* next();
virtual void sort();
};
template<typename T>
class Vector : public Container<T> {
public:
// ...
};
Vector<int> vi;
Vector<string> vs;
Why is this due to the guidelines naively? This is in particular naively because the base class Container has many virtual functions. The presented design introduces code bloat. Virtual functions are instantiated every time in a class template. In contrast, non-virtual functions are only instantiated if they are used.
A simple test with CppInsight proves my point.
The following program uses a std::vector<int> and a std::vector<std::string>.
CppInsight shows it. No method of std::vector is instantiated.
Here is the simplified program of the C++ core guidelines including the virtual function sort.
Now, the virtual function sort is instantiated. Here is only the output of CppInsight which shows the instantiation of the class template Container.
In total, I get 100 lines of code in the case of the virtual function.
The note to the guidelines gives a hint on how to overcome this code bloat. Often you can provide a stable interface by not parameterising a base. This brings me to the related rule T.84: Use a non-template core implementation to provide an ABI-stable interface.
Okay, I already have written about this technique in the post C++ Core Guidelines: Template Definitions. Rule T.84 mentions an alternative way to address a stable interface: Pimpl.
Pimpl stands for "pointer to implementation" and means to remove implementation details of a class by placing them in a separate class, accessed through a pointer. This technique should be in the toolbox of each serious C++ programmer. Pimpl is often also called compilation firewall because this technique breaks the dependency between the implementation and the users of the class interface. This means the implementation can be changed without recompiling the user code.
Here is the general structure from Herb Sutters Blog: GotW #100: Compilation Firewalls.
// in header file
class widget {
public:
widget();
~widget();
private:
class impl; // (1)
unique_ptr<impl> pimpl;
};
// in implementation file // (2)
class widget::impl {
// :::
};
widget::widget() : pimpl{ new impl{ /*...*/ } } { }
widget::~widget() { }
This are the points of this idiom.
Okay. Let me show a full example based on the one from cppreference.com.
// pimpl.cpp
#include <iostream>
#include <memory>
// interface (widget.h)
class widget {
class impl;
std::unique_ptr<impl> pImpl;
public:
void draw();
bool shown() const { return true; }
widget(int);
~widget();
widget(widget&&) = default;
widget(const widget&) = delete;
widget& operator=(widget&&);
widget& operator=(const widget&) = delete;
};
// implementation (widget.cpp)
class widget::impl {
int n; // private data
public:
void draw(const widget& w) { // (1)
if(w.shown())
std::cout << "drawing a widget " << n << '\n';
}
impl(int n) : n(n) {}
};
void widget::draw() { pImpl->draw(*this); } // (2)
widget::widget(int n) : pImpl{std::make_unique<impl>(n)} {}
widget::~widget() = default; // (3)
widget& widget::operator=(widget&&) = default;
// user (main.cpp)
int main()
{
std::cout << std::endl;
widget w(7);
w.draw();
std::cout << std::endl;
}
The draw call of the class impl uses a back-reference to widget (lines (1) and (2)). The destructor and the move assignment operator (line (3)) must be user-defined in the implementation file base because std::unique_ptr requires that the pointed-to type is a complete type.
The program behaves as expected:
Besides its pros, the pimpl idiom has two cons: there is an additional pointer indirection, and you have to store the pointer.
The last guideline for this post is about a typical misconception.
Let me try to use a virtual member function template.
// virtualMember.cpp
class Shape {
template<class T>
virtual bool intersect(T* p);
};
int main(){
Shape shape;
}
The error message from my GCC 8.2 compiler is crystal-clear:
The next guidelines and, therefore, my next post is about variadic templates. Variadic templates are templates that can accept an arbitrary number of arguments. The rules in the guidelines to variadic templates as many rules to templates consist only of the headings. This means I write in my next post more general about variadic76
Yesterday 6041
Week 35932
Month 149822
All 10307584
Currently are 153 guests and no members online
Kubik-Rubik Joomla! Extensions
Read more...
Read more... | https://www.modernescpp.com/index.php/c-core-guidelines-rules-for-templates-and-hierarchies | CC-MAIN-2022-40 | refinedweb | 769 | 57.37 |
tag:blogger.com,1999:blog-41494669822299374902017-02-08T20:53:10.599-08:00The WeTab.barsoum (Etheros) the x86 Android distribution onto the WetabHi folks, Long time no see. So I've put away the WeTab almost for good but then a couple of days ago I got to thinking it could be a nice central system to just spend a few minutes maybe browsing around some cooking videos on youtube in the kitchen. So I got up to my tinkering business again and whaddya know I actually learned something new in the process. <br/><br/>First off I downloaded the <a href="">latest release</a> of the <a href="">Android X86 distro</a> which looks like it's some port of the cyanogenmod system. <br/><br/>So previously I had mentioned in some install blog that with USB sticks it is a hit or miss...well apparently it is nothing physical, it's actually just a silly magic set of bytes that needs to be written someplace on the USB stick. I found this out when I started getting an Error 17 (or was it Error 7, I forget). In any case, according to these nice people at the <a href="">WeTab community</a> the USB stick needs to have that written somewhere in the first 400 bytes or so. <br/><br/>I finally managed to write the code on to the stick but not with the printf examples I found on-line. <br/><br/>Instead I opened up Hexfiend and created a file with the sole 4 bytes I needed: 9d 2a 44 7b <br/><br/>I named the file test.bin and then used dd to write the code where it has to go. <br/><br/>I'm not sure if I was just running into some silliness on my part but I was doing this on a mac and apparently there are two ways to refer to the physical device, either raw by /dev/rdisk2 or some other layered way with /dev/disk2. <br/><br/>Of course this needs to be done only after you've figured out what the proper path is to your USB stick by running <br/><br/>hdutil list <br/><br/>Anyhow, once you have that test.bin file created, you can then DD it to the write spot on the USB stick boot record. <br/><br/>sudo dd if=test.bin of=/dev/disk2 bs=1 seek=440 <br/><br/>Use hexdump then to ensure that you see those 4 bytes where they ought to have gone: <br/><br/>sudo hexdump -s 440 -n 4 /dev/disk2 <br/><br/><br/> First impression is I like this better than Windows on the WeTab because of the lesser performance hit. But it looks like the video rendering drivers are whacked. I still have to look around a bit to see if that's something that can easily be fixed with some boot setting. Well, enough tinkering for now, enjoy cracking open your old junk closet and digging the WeTab out :) <br/><img src="" height="1" width="1" alt=""/>barsoum (Etheros) reloaded<div dir="ltr" style="text-align: left;" trbidi="on">Hey there everyone. I apologize for the long absence, I've noticed this little blog really did actually come in handy to some people out there and for this I am glad. To those whom sent me but never received a response, I am truly sorry.<br /><br />I had given up a bit on the wetab seeing as how hot this little tablet can get. The fan blows out the heat from the vents and after a while it could get pretty uncomfortable if you're using it as a tablet. In fact, I started using it as though it were a laptop with a touch screen which was somewhat ok. This worked out best when I bought a a cover-keyboard for it:<br /><br /><div class="separator" style="clear: both; text-align: center;"><img id="irc_mi" src="" style="margin-top: 0px;" /><a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="200" src="" width="183" /></a></div><br /><div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="180" src="" width="200" /></a></div><br />This really helped make the tablet somewhat useful in the past year or so. But as work picked up the pace, this gizmo went into the dark corners of oblivion and I just simply didn't pick it up anymore. Although, whenever I did and started using it again, it felt awfully nice to be able to just point and click on the screen and drag windows around.<br /><br />Anyhow, today I went back and decided to crack it open again. I had seen the Windows 8 touch-screen laptops lying around in every electronics shop I've visited in the past year or so, flaunting their so-called novelty at the hapless consumer. I figured, why not see how it fairs on the Wetab. I had gone through a few websites here and there and many talk about installing Windows 8 on the ExoPC.<br /><br />First, as most resources indicate, you will need to get a USB version of the Windows 8 ISO. To do this you will need to download the Microsoft Tool for doing this <a href="" target="_blank">here</a>.<br /><br />You will then need to install this on any Windows machine you have lying around and feed it your Windows 8 ISO to create the bootable Windows 8 USB disk.<br /><br />Once you have this, you are set to go. If you think you might like to go back to your old system in the future, then first load up your Wetab system and attach an external USB with enough space to copy your whole storage for backup.<br /><br />You can then run the following dd command:<br /><br /> dd if=/dev/sda of=/media/USBWhatever/wetab-bakup.img bs=4096<br /><br />That will take a bit of time, maybe around 20 minutes.<br /><br />Now to install Windows 8, I advise using Plop boot to get your Wetab to boot from your USB drive. Although on the Wetab website (and also on this blog elsewhere), it is written that you can actually boot directly from the USB by doing some evasive combination of things (quicktouch while holding down power as soon as the blue led comes on), this has proven to be quite frustrating and only occasionally effective at best. Therefore, I truly advise you to download <a href="" target="_blank">Plop Boot manager here</a>, install it, and use it to select the USB option at start up after you load into plop boot manager.<br /><br />One thing to note is that plop for some reason doesn't really recognize your Keyboard or anything so you will have to make do with single-clicks on the quicktouch button to toggle between options and a long-click-hold on the quicktouch button to select the option you want.<br /><br />Finally, having installed plop boot, attach your Windows 8 USB drive, reboot and boot into Plop. Select USB from the menu and follow along the Windows 8 installation screens. The rest seems to be quite mundane. You shouldn't have to worry about the drivers as Windows 8 seems to pickup everything on the Wetab including the touchscreen and wireless device.<br /><br />I haven't had much time to play with it but if I get to try something interesting, I'll be sure to post. And of course if you have new and interesting ideas to share, please share!<br /><br /><br /><br /></div><img src="" height="1" width="1" alt=""/>barsoum (Etheros) Acceleration/EasingI've found Philip Merk's twofing daemon to be quite a useful feature to have running on the Wetab. I've occasionally had some beef with the acceleration and easing feature where I'm always overshooting the point in the document that I am trying to scroll to simply because the thing won't stop scrolling. Also sometimes it thinks that I'm doing a quick flick when all I'm doing is removing my two fingers from the screen.<br /><br />A quick patch helped remove the ease effect. Open up gesture.c and do a find on startEasing. Comment that line out and recompile and install. You won't get the easing effect any more. I'm not sure if I will be content with this, the easing and acceleration helped me get through long scrolling adventures but I've also had a few frustrated experiences where I just can't get it to stop scrolling once I've reached the point I want. We'll see how things go from here on.<br /><br />I'm also enjoying a different layout with the Cairo-docks panels, that along with the metacity composite manager give a nice look and feel for things. Also I've found that opening up the system fonts and cranking those up makes all the menus in firefox and all the other apps a lot easier to deal with, what with my clumsy fingers smudging all over the screen.<br /><br />Over and out.<img src="" height="1" width="1" alt=""/>barsoum (Etheros) find it very difficult to stick with any one situation or setup for too long. If you ever visit my office at work, you will find me moving my furniture around atleast once every month or two. Well, I grew tired of the regular old ubuntu panels so I went ahead and downloaded Cairo dock from the Ubuntu package manager. I'm liking it so far although it does lack a few essential applets in my opinion; for example, ther is no wifi / network manager plug-in. <br /><br />You should also know that it will need a composite manager, if you ever need to turn that off, use the gconf-editor (just run the command from a terminal) apps->metacity->general->compositing_manager and uncheck that checkbox to turn the composite manager off. Sometimes I find that these could really hog up resources and knowing how to turn it off comes in handy.<br /><br />I've also found that using the florence virtual keyboard is much more comfortable, because you can play with its opacity and still see thru it to the windows below. I am though having a bit of a nasty time with these search suggestions that firefox keeps dishing out when I'm trying to fill search forms and other fields; the suggestion box steals the focus from the keyboard and i find myself having to re-type several letters that get dropped off while I'm typing. If anyone figures out how to deal with this please don't hesitate to clue me in!<br /><img src="" height="1" width="1" alt=""/>barsoum (Etheros) Knight on WeTab Dosbox finally useable<div dir="ltr" style="text-align: left;" trbidi="on">So I finally got that Gabriel Knight game to be useable on the Wetab with Dosbox. I must say that it took a good day of debugging through the doxbox mouse.cpp file but eventually I got myself an ok solution though I have no idea how extensible it is.<br /><br />This post by Yushatak really helped get my exploratory juices running again:<br /><br /><a href=""></a><br /><br />So according to this post all I needed was to set emulate=0 inside the mouse.cpp file for dosbox, recompile and everything would work atleast in fullscreen.<br /><br />Well further down the post Yushatak does make note of this not being universally effective. It appears that games under DOS did all sorts of weird stuff with the mouse coordinates.<br /><br />First off I had to set autolock to false inside my dosbox.conf file. That atleast got the touchscreen to interact with the game. But what would happen was that the first screen with the Intro and the game selection menu my mouse worked perfectly however, on the first scene in the game where you start in St. George's bookshop the mouse would do some very funky stuff where if you clicked anywhere near the top of the screen the clicks were fine. While when you moved down to the bottom, the cursor moved slower than your finger as you moved down the screen.<br /><br />I eventually ended up adding a few tens of printf statements to view the values of the mouse coordinates throughout the CursorMoved function and found that at some point after the Sierra intro screen and the actual game, the game would somehow reset the mouse.max_y value to 155 instead of 199 as it was on the first screen where everything worked normally. Now I followed this through and there is some sort of fraction that identifies where the mouse really is and that is multiplied by the max_y value to identify where the cursor will end up on the screen. So that explains why the cursor was moving with a varying rate as you moved down the screen...the percentage offset was changing on 0-155 pixel range while your finger was working through a 0-199 finger range. This probably sounds like gibberish but it's as close an analysis as I can come up with at the moment.<br /><br />Anyhow by hardcoding mouse.max_y=199 into my mouse.cpp, I've gotten the game to behave correctly in gameplay. Now this only works in Windowed mode. In fullscreen my mouse cursor just goes and sits in the right lower corner of the screen and won't budge. For now I will be satisfied with the windowed mode though fullscreen would have really been my preference.<br /><br />Below is the debug stuff along with the patch I added to mouse.cpp to get this to work. This file is under the src/ints folder in the dosbox source package.<br /><br /><pre style="font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; color: #000000; background-color: #eee;font-size: 12px;border: 1px dashed #999999;line-height: 14px;padding: 5px; overflow: auto; width: 100%"><code> <br />void Mouse_CursorMoved(float xrel,float yrel,float x,float y,bool emulate) { <br />printf("=====ENTER CUSRORMOVED====\n"); <br /> printf("xrel is %f\n", xrel); <br /> printf("yrel is %f\n", yrel); <br /> printf("mouse.pixelPerMickey_x is %f\n", mouse.pixelPerMickey_x); <br /> printf("mouse.pixelPerMickey_y is %f\n", mouse.pixelPerMickey_y); <br /> printf("mouse.mickey x is %f \n",mouse.mickey_x); <br /> printf("mouse.mickey y is %f \n",mouse.mickey_y); <br /> <br /> <br /> printf("mouse.senv x is %f \n",mouse.senv_x); <br /> printf("mouse.senv y is %f \n",mouse.senv_y); <br /> printf("mouse. x is %f \n",mouse.x); <br /> printf("mouse. y is %f \n",mouse.y); <br /> printf("x is %f \n",x); <br /> printf("y is %f \n",y); <br /> <br /> printf("INITIALIZATION COMPLETE\n"); <br /> float dx = xrel * mouse.pixelPerMickey_x; <br /> float dy = yrel * mouse.pixelPerMickey_y; <br /> <br /> printf("$> dx = xrel * mouse.pixelPerMickey_x = > %f = %f * %f\n",dx,xrel,mouse.pixelPerMickey_x); <br /> <br /> printf("$> dy = yrel * mouse.pixelPerMickey_y = > %f = %f * %f\n",dy,yrel,mouse.pixelPerMickey_y); <br /> <br /> if((fabs(xrel) > 1.0) || (mouse.senv_x < 1.0)) dx *= mouse.senv_x; <br /> if((fabs(yrel) > 1.0) || (mouse.senv_y < 1.0)) dy *= mouse.senv_y; <br /> <br /> if (useps2callback) dy *= 2; <br /> <br /> <br /> printf("mouse.mickey_x +=dx\n"); <br /> printf("mouse.mickey_y +=dy\n"); <br /> <br /> mouse.mickey_x += dx; <br /> mouse.mickey_y += dy; <br /> <br /> printf("mouse.mickey x is %f \n",mouse.mickey_x); <br /> printf("mouse.mickey y is %f \n",mouse.mickey_y); <br /> printf("dx is %f \n",dx); <br /> printf("dy is %f \n",dy); <br /> printf("mouse. x is %f \n",mouse.x); <br /> printf("mouse. y is %f \n",mouse.y); <br /> printf("x is %f \n",x); <br /> printf("y is %f \n",y); <br /> <br />emulate=0; <br /> if (emulate) { <br /> printf(" ==Inside emulate\n"); <br /> <br /> printf(" mouse.x +=dx\n"); <br /> printf(" mouse.x +=dy\n"); <br /> mouse.x += dx; <br /> mouse.y += dy; <br /> printf(" currently x is %f \n",mouse.x); <br /> printf(" currently y is %f \n",mouse.y); <br /> } else { <br /> if (CurMode->type == M_TEXT) { <br /> printf(" ==Inside Curmode test\n"); <br /> mouse.x = x*CurMode->swidth; <br /> mouse.y = y*CurMode->sheight * 8 / CurMode->cheight; <br /> <br /> printf(" currently CurMode->swidth is %f \n",CurMode->swidth); <br /> printf(" currently CurMode->sheight is %f \n",CurMode->sheight); <br /> printf(" currently CurMode->cheight is %f \n",CurMode->cheight); <br /> printf(" currently mouse.x is %f \n",mouse.x); <br /> printf(" currently mouse.y is %f \n",mouse.y); <br /> <br /> <br /> } else if ((mouse.max_x < 2048) || (mouse.max_y < 2048) || (mouse.max_x != mouse.max_y)) { <br /> printf(" ==Curmode else ((mouse.max_x < 2048) || (mouse.max_y < 2048) || (mouse.max_x != mouse.max_y))\n"); <br /> if ((mouse.max_x > 0) && (mouse.max_y > 0)) { <br /> printf(" ==if(mouse.max_x > 0) && (mouse.max_y > 0)\n"); <br /> <br /> mouse.max_y = 199; <br /> printf(" just set mouse.max_y is %i \n",mouse.max_y); <br /> <br /> printf(" currently mouse.max_x is %i \n",mouse.max_x); <br /> printf(" currently mouse.max_y is %i \n",mouse.max_y); <br /> printf(" currently mouse.x is %f \n",mouse.x); <br /> printf(" currently mouse.y is %f \n",mouse.y); <br /> printf(" currently x is %f \n",x); <br /> printf(" currently y is %f \n",y); <br /> <br /> <br /> printf(" $> mouse.x = x*mouse.max_x %f\n",x*mouse.max_x); <br /> printf(" $> mouse.y = y*mouse.max_y %f\n",y*mouse.max_y); <br /> mouse.x = x*mouse.max_x; <br /> mouse.y = y*mouse.max_y; <br /> <br /> } else { <br /> printf(" ==else(mouse.max_x > 0) && (mouse.max_y > 0)\n"); <br /> printf(" $> mouse.x += xrel\n"); <br /> printf(" $> mouse.y += yrel\n"); <br /> printf(" currently mouse.x is %f \n",mouse.x); <br /> printf(" currently mouse.y is %f \n",mouse.y); <br /> <br /> <br /> mouse.x += xrel; <br /> mouse.y += yrel; <br /> <br /> <br /> printf(" currently mouse.x is %f \n",mouse.x); <br /> printf(" currently mouse.y is %f \n",mouse.y); <br /> } <br /> } else { // Games faking relative movement through absolute coordinates. Quite surprising that this actually works.. <br /> printf(" ==Curmode else faking relative motion\n"); <br /> <br /> mouse.x += xrel; <br /> mouse.y += yrel; <br /> <br /> printf(" $> mouse.x += xrel\n"); <br /> printf(" $> mouse.y += yrel\n"); <br /> <br /> printf(" currently mouse.x is %f \n",mouse.x); <br /> printf(" currently mouse.y is %f \n",mouse.y); <br /> } <br /> } <br /> <br /> /* ignore constraints if using PS2 mouse callback in the bios */ <br /> <br /> if (!useps2callback) { <br /> printf("==useps2callback\n"); <br /> if (mouse.x > mouse.max_x) mouse.x = mouse.max_x; <br /> if (mouse.x < mouse.min_x) mouse.x = mouse.min_x; <br /> if (mouse.y > mouse.max_y) mouse.y = mouse.max_y; <br /> if (mouse.y < mouse.min_y) mouse.y = mouse.min_y; <br /> printf(" currently mouse.x is %f \n",mouse.x); <br /> printf(" currently mouse.y is %f \n",mouse.y); <br /> } <br /> <br />//mouse.y = mouse.y+50; <br />printf("===END CURMOVED===\n"); <br /> <br /> Mouse_AddEvent(MOUSE_HAS_MOVED); <br /> DrawCursor(); <br />} <br /> <br /></code></pre><br /><br /><br /></div><img src="" height="1" width="1" alt=""/>barsoum (Etheros) for a Guitar App<div dir="ltr" style="text-align: left;" trbidi="on">I :)</div><img src="" height="1" width="1" alt=""/>barsoum (Etheros) WeTab Screen on Natty Narwhal<div dir="ltr" style="text-align: left;" trbidi="on">So the rotation bit turned out to be simpler than I had thought and thanks to the following link: <a href=""></a> I've gotten it working.<br /><br />So as before I have two scripts: rot.sh and norm.sh<br /><br />rot" 0 -1 1 1 0 0 0 0 1<br />xrandr -o left<br /></code></pre><br />norm" 1 0 0 0 1 0 0 0 1<br />xrandr -o normal<br /></code></pre><br />If I were awake in Linear Algebra class I would probably have realized the solution a lot earlier. In any case the first line basically represents something like the following matrix which in essence rotates the coordinate system by 90 degrees to the left. The details slip me at present but I'm sure I'll dream of matrices tonight...<br /><br />0 -1 1<br />1 0 0 <br />0 0 1<br /><br />Alrighty, over and out!</div><img src="" height="1" width="1" alt=""/>barsoum (Etheros) Narwhal Installed<div dir="ltr" style="text-align: left;" trbidi="on">This will be a shorter post. I updated my Ubuntu install to Natty Narwhal and a bunch of things sort of broke. One was the multi-touch, if I did a two finger scroll now, touch completely stops working. I later realized that it is actually the GINN daemon kicking in and taking over the clicks but not letting go of them so if you're scrolling, you can scroll up and down for ever but you can't get back your single clicks. To remedy this I went ahead and disabled GINN from the Startup Applications applet and re-enabled twofing. I've also updated to twofing 9a, see this link regarding this: <a href=""></a><br /><br />The other problem I'm working on right now is the rotation. It looks like the configuration I had described in my earlier posts doesn't apply anymore. You have to set a Transformation Matrix for the coordinate system. I haven't had enough time to sort it out yet but I will post the details as soon as I have that figured out. Other than those two things, my system seems fine, and I think it's running slightly faster actually :)<br /><br /></div><img src="" height="1" width="1" alt=""/>barsoum (Etheros) double click bug solved<div dir="ltr" style="text-align: left;" trbidi="on">Hey there,<br /> So even though nobody else was complaining about twofing double-clicking when it should only be single clicking, I've gone ahead and assumed that twofing is to blame. I messed around with my drivers but everything seemed ok.<br /><br />So, I popped open gestures.c and looked through it. I added some debug prints at each of the fingerdown if statements and noticed that when I ran in debug mode (twfing --debug), nothing out of the ordinary was happening...<br />I press once and the finger down boolean switches to 1, when I lift my finger, the boolean goes back to zero...but two clicks appear to reach gnome. I remembered reading in the twofing doc that for interactions with non-special windows clicks are passed off to x...but the code on lines 509 to 516 looked suspicious:<br /><br /><pre style="background-color: #eeeeee; border: 1px dashed rgb(153, 153, 153); color: black; font-family: Andale Mono,Lucida Console,Monaco,fixed,monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"><code> else if (fingersDown == 0 && fingersWereDown > 0) {<br /> /* Last finger released */<br /> if (hadTwoFingersOn == 0 && !isButtonDown()) {<br /> /* The button press time has not been reached yet, and we never had two<br /> * fingers on (we could not have done this in this short time) so<br /> * we simulate button down and up now. */<br /> <br /> pressButton();<br /> releaseButton();<br /><br /><br /><br /></code></pre><br />So the comments there didn't make much sense to me but I went ahead and removed the last two statements which basically sent a (additional) mouse click.<br /><br />Ran make and launched twofing and lo and behold no more double clicks for my single-click touches. Now I still need to run with this patch a little longer to confirm that it doesn't break anything but so far everything seems to be just perfecto...!<br /><br /><br /></div><img src="" height="1" width="1" alt=""/>barsoum (Etheros) twofing<div dir="ltr" style="text-align: left;" trbidi="on">Hey there. Haven't really had time to do this post but I'll dump a bunch of information here which I basically ripped off of this link: <a href=""></a><br /><br />I'm going to assume that you have Ubuntu running on your WeTab and have configured the Multitouch driver properly as I've explained in earlier posts. <br /><br />Now, download the latest twofing package...I worked with this one: <a href=";do=get&target=twofing-0.0.7.tar.gz">;do=get&target=twofing-0.0.7.tar.gz</a><br /><br />Unpack it somewhere and have alook at its content.<br /><br />Install some needed libraries:<br /><div><span style="border-collapse: collapse;"><span style="font-family: 'courier new', monospace;"><i>sudo apt-get install build-essential libx11-dev libxtst-dev libxi-dev x11proto-randr-dev libxrandr-dev</i></span></span></div><br /> Now open up the file 70-touchscreen-egalax.rules and change the Product ID to match the device ID you get from your lsusb output.<br /><br /><div><span style="border-collapse: collapse;"><i><span style="font-family: 'courier new', monospace;">SYSFS{idProduct}=="72a1"</span></i></span></div><br />Finally run "make"<br />and then "sudo make install"<br /><br />Now start twofing: twofing<br /><br />If you want to run twofing at startup use the --wait flag in your call. (<span style="color: black; font-family: courier, monospace; font-size: 14px; white-space: pre-wrap;"><i>twofing --wai</i>t )</span><br /><br /><span style="color: black; font-family: courier, monospace; font-size: 14px; white-space: pre-wrap;"></span> So now you should be able to do some fun stuff like using twofinger to scroll up and down and so on. To be honest I haven't had much use for it really, especially because it converts all single finger touches to double-clicks which annoyed the heck out of me, but occasionally it will be useful if I need to use the mouse roller (brightness applet for ubuntu doesn't seem to work any other way...). I considered modifying the code to remove the double-click "feature" but then I didn't have that much time on my hands.<br /><br />Cheers!</div><img src="" height="1" width="1" alt=""/>barsoum (Etheros) far so good with Ubuntu<div dir="ltr" style="text-align: left;" trbidi="on">so far I'm finding Ubuntu a lot nicer to deal with, I know I promised to explain how to get twofing to work...I'll get to it soon. I actually haven't found it so useful. In fact what turned out to be really great was the Okular reader. it allows you to drag the pages with the left mouse button and opens pdfs, cbz, cbr...you name it. And to install it you just use the ubuntu package center.<br /><br />you may also want to make sure you're running with the ubuntu desktop setup and not the netbook layout because that was torture...to get that done you must logout and select a user to login as and at the bottom there should be a selection box to switch to the regular old ubuntu interface.<br />Rotating the screen as described in one of my earlier posts makes the reading experience better, except the hardware leaves a lot to be desired where the display is concerned.<br /><br />You will likely find the mouse pointer a pain...i've changed that with a little bullseye png i made and turned into a cursor using xgencursor:<br /><br /><a href="" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="" /></a><br /><br />just create a file called target.cursor and add the following:<br /><br />32 16 16 target.png<br /><br />to make the hotspot of the cursor be the center. Now run:<br /><br />$>xcursorgen target.cursor default<br /><br />you should end up with a file called default containing your new cursor, it took a bit of time to figure out where to put it to get it picked up. I can't recall exactly where i placed it but what i did was i found the folder containing the files for my current theme, backed up the cursor file. replaced it and restarted.<br /><br />Cheers from the midst of the Cairo revolution!<br /><br /></div><img src="" height="1" width="1" alt=""/>barsoum (Etheros) the touchscreen firmware to 1.006h<div dir="ltr" style="text-align: left;" trbidi="on">So I've been having some really ugly experiences with the touchscreen where it doesn't regsiter my touch in a lot of instances. I've just upgraded the firmware to 1.006h following the details described here: <br /><br /><a href=""></a><br /><br />You will have to do this from inside the WeOs because I'm not sure where to get the eUpgrade tool, maybe it comes with the egalax drivers...anyhow that's not our concern at this point.<br /><br />First, make sure you have a USB Keyboard and Mouse handy.<br /><br />Now, open up a shell and<br /><br />$>cd Downloads<br />$>wget<br />$>sudo unzip TouchPanel_YFO_v1006h.zip -d /usr/share/tiitoo/firmware/<br />$>eUpgrade -f /usr/share/tiitoo/firmware/YoungFast_11p6_24x43_72A1v1006h_f04_dsab_ASG.EGXP<br /><br />Now you will see something like:<br /><br /><pre class="prettyprint"><span class="pun">*)</span><span class="pln"> EETI firmware upgrade tool version</span><span class="pun">:</span><span class="pln"> </span><span class="lit">1.03</span><span class="pun">.</span><span class="lit">1011</span><span class="pln"><br /><br /> </span><span class="pun">****</span><span class="pln"> DO NOT TOUCH THE SCREEN DURING RUNNING</span><span class="pun">!!</span><span class="pln"> </span><span class="pun">****</span><span class="pln"><br /><br /> </span><span class="pun">(</span><span class="pln">I</span><span class="pun">)</span><span class="pln"> </span><span class="typ">Found</span><span class="pln"> a PCAP device on </span><span class="pun">/</span><span class="pln">dev</span><span class="pun">/</span><span class="pln">hidraw0<br /> </span><span class="pun">(</span><span class="pln">I</span><span class="pun">)</span><span class="pln"> </span><span class="typ">Model</span><span class="pun">:</span><span class="pln"> PCAP72A1<br /> </span><span class="pun">(</span><span class="pln">I</span><span class="pun">)</span><span class="pln"> </span><span class="typ">Type</span><span class="pun">:</span><span class="pln"> PCAP7200 </span><span class="typ">Series</span><span class="pln"><br /> </span><span class="pun">(</span><span class="pln">I</span><span class="pun">)</span><span class="pln"> </span><span class="typ">Version</span><span class="pun">:</span><span class="pln"> </span><span class="lit">1.005f</span><span class="pln"><br /><br /> </span><span class="pun">(</span><span class="pln">I</span><span class="pun">)</span><span class="pln"> </span><span class="typ">Image</span><span class="pln"> file </span><span class="pun">[</span><span class="str">/usr/</span><span class="pln">share</span><span class="pun">/</span><span class="pln">tiitoo</span><span class="pun">/</span><span class="pln">firmware</span><span class="pun">/</span><span class="typ">YoungFast_11p6_24x43_72A1v1006h_f04_dsab_ASG</span><span class="pun">.</span><span class="pln">EGXP</span><span class="pun">]:</span><span class="pln"> </span><span class="typ">Opened</span><span class="pln"><br /> </span><span class="pun">(</span><span class="pln">I</span><span class="pun">)</span><span class="pln"> </span><span class="typ">Load</span><span class="pln"> image file</span><span class="pun">:</span><span class="pln"> OK<br /> </span><span class="pun">(</span><span class="pln">I</span><span class="pun">)</span><span class="pln"> </span><span class="typ">Waiting</span><span class="pun">............</span></pre><pre class="prettyprint"><span class="pun"> </span></pre><pre class="prettyprint"><span class="pun"> </span></pre>and then pinnwand will restart. DO NOT PANIC IF THE TOUCHSCREEN DOES NOT REACT TO YOUR TOUCH. Just do the eupgrade one more time, this is what happened with me,<br /><br />eUpgrade -f /usr/share/tiitoo/firmware/YoungFast_11p6_24x43_72A1v1006h_f04_dsab_ASG.EGXP<br /><br />If fortune does not side with you then you can always revert to the older firmware which you should find under the same folder: /usr/share/tiitoo/firmware/YoungFast_11p6_24x43_72A1v1005...<br /><br />Best of luck, if you worried about doing this wait for the official update...! <br /><br /></div><img src="" height="1" width="1" alt=""/>barsoum (Etheros) about the touchsceen and multitouch<div dir="ltr" style="text-align: left;" trbidi="on">Hey everybody,<br /> So for the longest time I had things a bit switched up in my mind, I talked about multitouch and enabling the touchcreen as though they were one and the same...but they're not. Merely enabling the touchscreen allows the OS to register the "touch" events, mainly where you clicked and a mouse click...multitouch says that you've used more than one finger and thus two/three locations would be registeted instead.<br /><br />In my post about installing ubuntu I don't make much of a distinction, in fact I used the terms interchangeably. All that changed when my friend Reiner told me about twofing, which basically captures multitouch events and does special things with particular windows such as scroll forward in evince if you swipe two fingers acoss the screen.<br /><br />Well first off et me describe how to get the multitouch driver installed. Please note that if you had gotten the touchscreen to work using the method described in the Ubuntu installation post, you will need to undo these changes...namely remove the usbhid.quirks changes to the GRUB file and remove the 99-calibration.conf file rom /etc/X11/xorg.conf.d/<br /><br />Next you will need to make sure you have added the utouch-team repository to the ubuntu packager (you may not need the unstable repo anymore)<br /><br />sudo add-apt-repository ppa:utouch-team/utouch<br />sudo add-apt-repository ppa:utouch-team/unstable <br /><br />sudo apt-get upgrade<br />sudo apt-get update<br /><br /><br />Next, download the egalax drivers:<br /><br /><br />sudo apt-get install hid-egalax-dkms<br /><br /><br />You should now be good to go. Restart the machine. For some reason I was getting weird behavior on my first restart and had to manually shut down the machine by holding down the power button. But the next time it loaded up it was fine...so in case you run in to any problems...restart again, if it persists across more restarts then you have a different issue.<br /><br /><br />Now if you recall, we had rotation scripts...well those won't work anymore, atleast not the way they are at the moment. Instead your files should now look like so:<br /><br /><br /><br />#normal.sh<br /><br />#!/bin/sh<br /><br />xrandr -o normal<br />xinput set-int-prop 9 "Evdev Axes Swap" 8 0<br />xinput set-int-prop 9 "Evdev Axis Calibration" 32 0 32760 0 32760<br />xinput set-int-prop 10 "Evdev Axes Swap" 8 0<br />xinput set-int-prop 10 "Evdev Axis Calibration" 32 0 32760 0 32760<br /><br /> -----------------------------------------------<br /><br /><br /># rot.sh<br />#!/bin/sh<br /><br />xrandr -o left<br /><br />xinput set-int-prop 9 "Evdev Axes Swap" 8 1<br />xinput set-int-prop 9 "Evdev Axis Calibration" 32 32760 0 0 32760<br />xinput set-int-prop 10 "Evdev Axes Swap" 8 1<br />xinput set-int-prop 10 "Evdev Axis Calibration" 32 32760 0 0 32760 <br /><br /><br />It took me a bit to figure out the settings, but basically it looks like the hid-egalax driver granulates the screen differently (and possibly with different scales depending on what axis).<br /><br />You should now be running with multitouch support and at the same point where the ubuntu installation post left off.<br /><br />The next bit just explains where the coordinates above came from. If you're not interested in that, well then you're done and can go enjoy!<br /><br />If you run xinput_calibrator you will get some output that looks like:<br /><br />Warning: multiple calibratable devices found, calibrating last one (eGalax Inc. USB TouchController)<br /> use --device to select another one.<br />Calibrating EVDEV driver for "eGalax Inc. USB TouchController" id=10<br /> <b> current calibration values (from XInput): min_x=0, max_x=32760 and min_y=0, max_y=32760</b><br /><br />Doing dynamic recalibration:<br /> Setting new calibration data: -29, 32920, 387, 32606<br /><br /> Now I'm mainly interested in the bolded line, it tells us what the default values for the perfectly functional landscape setup is. The new calibration is ok, but you will find that it isn't perfect.<br /><br />Ok so let's cut that up, it seems to say that in landscape mode, the lower left corner of the screen is at coordinates (min_x,min_y) which is (0,0).<br /><br />What about the max? That's our upper right corner of the screen, (max_x,max_y) which is (32760,32760). <br /><br />So the calibration string with min_x, max_x, min_y, max_y will be:<br />0 32760 0 32760 <br /><br />Ok, well what happens when we rotate the screen. Well we need to set the new lower left corner of our screen but using the same coordinate system as before. So our screen's origin (new lower-left corner) is now where (32760,0) used to be and the new upper right corner is at (0,32760) thus the new calibration string with min_x, max_x, min_y, max_y will be: <br />32760 0 0 32760 <br /><br />Well that's all folks. Next time I will explain how I got twofing to work.<br />Cheers!</div><img src="" height="1" width="1" alt=""/>barsoum (Etheros) Rotation with Ubuntu on WeTab<div dir="ltr" style="text-align: left;" trbidi="on">Update 26 OCT 2011:<br />If you're running Ubuntu 11+ please check out this post instead: <a href=""></a><br /><br />======================================<br /><br />So holding up the WeTab for long periods of time can be a pain. Sometimes it seems that holding it up on it's side might be more comfortable, especially when you're lying down in bed and trying to read something.<br /><br />well as it so happens, you can rotate the screen using xrandr. You might not like what you see but atleast you have the option.<br /><br />Well first do an xinput -list to see the devices you can interact with to fix up calibration through a script:<br /><br />xinput -list<br />⎡ Virtual core pointer id=2 [master pointer (3)]<br />⎜ ↳ Virtual core XTEST pointer id=4 [slave pointer (2)]<br /><span style="font-size: x-small;"><b>⎜ ↳ eGalax Inc. USB TouchController id=9 [slave pointer (2)]<br />⎜ ↳ eGalax Inc. USB TouchController id=10 [slave pointer (2)]<br />⎜ ↳ eGalax Inc. USB TouchController id=11 [slave pointer (2)]</b></span><br />⎜ ↳ ImPS/2 Generic Wheel Mouse id=15 [slave pointer (2)]<br />⎜ ↳ Microsoft Mouse id=16 Power Button id=8 [slave keyboard (3)]<br /> ↳ USB 2.0 Camera id=12 [slave keyboard (3)]<br /> ↳ Asus Laptop extra buttons id=13 [slave keyboard (3)]<br /> ↳ AT Translated Set 2 keyboard id=14 [slave keyboard (3)]<br /> ↳ Microsoft Keyboard id=17 [slave keyboard (3)]<br /><br />Note the ids for "<span style="font-size: x-small;"><b>eGalax Inc. USB TouchController" </b></span><br /><span style="font-size: x-small;"><b><br /></b></span><br /><span style="font-size: x-small;"><span style="font-size: small;">As a quick test to check which device really is the one we need to calibrate we can take each of the ids and plug it in to the following command and then touch the screen with a finger, if the Axes seem to be inverted, we know we have the right device:</span></span><br /><br /><span style="font-size: x-small;"><span style="font-size: small;">xinput set-int-prop 9 "Evdev Axes Swap" 8 1 </span></span><br /><span style="font-size: x-small;"><span style="font-size: small;">xinput set-int-prop 10 "Evdev Axes Swap" 8 1</span></span><br /><span style="font-size: x-small;"><span style="font-size: small;">xinput set-int-prop 11 "Evdev Axes Swap" 8 1</span></span><br /><span style="font-size: x-small;"><br /></span><br /><span style="font-size: x-small;"><span style="font-size: small;">Once you know which device id is the right one (mine was 10), you can restore the proper axes by switching the 1 to a 0:</span></span><br /><br /><span style="font-size: x-small;"><span style="font-size: small;"> </span></span><span style="font-size: x-small;"><span style="font-size: small;">xinput set-int-prop 10 "Evdev Axes Swap" 8 0 </span></span><br /><br /><span style="font-size: x-small;"><span style="font-size: small;"> you should now create the following scripts that will help you rotate and restore your view.</span></span><br /><span style="font-size: x-small;"><span style="font-size: small;"><br /></span></span><br /><span style="font-size: x-small;"><span style="font-size: small;">#rot.sh</span></span><br /><span style="font-size: x-small;"><span style="font-size: small;">#!/bin/sh<br /><br />xrandr -o left<br />xinput set-int-prop 10 "Evdev Axes Swap" 8 1<br />xinput set-int-prop 10 "Evdev Axis Calibration" 32 4052 36 35 4156</span></span><br /><span style="font-size: x-small;"><span style="font-size: small;"><br /></span></span><br /><span style="font-size: x-small;"><span style="font-size: small;">#norm.sh</span></span><br /><span style="font-size: x-small;"><span style="font-size: small;">#!/bin/sh<br /><br />xrandr -o normal<br />xinput set-int-prop 10 "Evdev Axes Swap" 8 0<br />xinput set-int-prop 10 "Evdev Axis Calibration" 32 -5 4100 59 4100</span></span><br /><span style="font-size: x-small;"><span style="font-size: small;"><br /></span></span><br /><span style="font-size: x-small;"><span style="font-size: small;">* remember to change the device id 10 to whatever works for you. I describe next what to do about the calibrations.</span></span><br /><span style="font-size: x-small;"><span style="font-size: small;"></span></span><br /><span style="font-size: x-small;"><span style="font-size: small;"><br />Finally make sure you download and install xinput_calibrator: <a href=""></a> (the debian package should do).</span></span><br /><span style="font-size: x-small;"><span style="font-size: small;"><br /></span></span><br /><span style="font-size: x-small;"><span style="font-size: small;">Now do a xrandr -o left and run the xinput_calibrator. The output should be the calibration parameters that you place on the last command of both scripts (evdev axis calibration).</span></span><br /><span style="font-size: x-small;"><span style="font-size: small;"><br /></span></span><br /><span style="font-size: x-small;"><span style="font-size: small;">Finally run the scripts and enjoy the proper calibrations of both landscape and portrait viewing on your WeTab!</span></span><br /><br /><br />UPDATE 26 FEB 2011:<br />---------------------------------------<br />The above instructions will work only if you are running with single-touch, to enable multitouch and have rotation work correctly please see: <a href=""></a><br /><br /> </div><img src="" height="1" width="1" alt=""/>barsoum (Etheros) Onboard keyboard problem<div dir="ltr" style="text-align: left;" trbidi="on">So while messing around with the onboard keyboard I double-clicked it's top bar by mistake and ended up with the keyboard totally covering my screen. To further complicate things, in UNR, the maximize/close/minimize buttons are hidden, so there is basically no way to change the size of the onboard window once it is maximized.<br /><br /><br /><br />After a good hour or so of tinkering I fell upon this discussion:<br /><br /><br />So basically all you need to do is use a normal keyboard to launch gconf-editor from a terminal and then go under /apps/Onboard and change the Height and Width settings and relaunch onboard and it should be fine from then on.<br /><br />One mistake I did make was I had su-ed and was configuring the wrong profile, which prolonged my confirming the solution.<br /><br />debb1046 mentioned twofing which should allow you to use two fingers to scroll and do some fancy multitouch things with ubuntu (I believe you have to make sure you install the ppa/utouch).<br /><br /><br />Cheers</div><img src="" height="1" width="1" alt=""/>barsoum (Etheros) Ubuntu Installation<div dir="ltr" style="text-align: left;" trbidi="on"><div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="222" src="" width="400" /></a></div><br />So I've finally had the time to open up my WeTab again and start fiddling around with it. Actually I had sort of given up on the WeTab a bit, especially after the seeing the Bookman reader on IPad work so beautifully and then comparing that with the readers for WeTab...they don't compare unfortunately. Additionally the performance of the OS leaves a lot to be desired.<br /><br />In any case, I've made my peace with the WeTab and started caring for it once more. So here without ado, I explain how I went about installing Ubuntu on it.<br /><br />1- Download the NetBook Remix 10.10 iso (Maverick)<br />2- Download the Universal Ubuntu USB installer on a windows machine (You could also use Ubuntu to do it, refer to the link I provide below).<br />3- Download GParted Live (USB version)<br />4- Make available a USB flash drive, preferably with an LED light<br />5- USB Mouse and Keyboard (preferably wireless with one receiver to make things easier otherwise you will be doing quite a bit of plugging and inserting)<br />First off you will need to put GParted on the flash drive ().<br /><br />Now you should be ready to repartition your WeTab Flash Drive. To do this, connect your USB Flash to the USB port on your WeTab and boot from the USB drive.<br /><br />To boot from the USB disk is kind of tricky, have a look at the developer page: <a href=""></a><br />(HowTo – Install WeTab OS with Recovery USB Boot Stick)<br /><br />Basically you turn on the Wetab and then<b> <span style="color: blue;">as soon as you see the blue led</span></b> in the top left corner light up, <b style="color: red;">press both the power-button and the quicktouch button</b> (top left corner) <b style="color: red;">together for approximately 1 second</b>. It takes a bit of practice to get that just right so don't give up if it doesn't work right away but what should happen then is that your GParted Live should boot up from the USB flash drive.<br /><br />You can use the quicktouch button on the upper left corner of the WeTab to browse through the Boot menu for Gparted. One touch switches between the option, if you hold down on the quicktouch button, it selects the entry. I chose "other modes of Gparted" and then "Run from RAM or memory", this way you can pull out the USB flash drive once it is done loading up the OS.<br /><br />Once you're in GParted you will want to started up the GParted application (if it doesn't automatically load up). Select the sda3 partition (the biggest one) and then Resize. My WeTab has 32GB, So I resized mine to allow 8 GB for Ubuntu.<br /><br />It should take about 15 minutes to resize the partitions.<br /><br />Once that is done you should create a USB Installer for Ubuntu, just follow the directions on <a href=""></a> where it says Create a USB Drive (click show me how).<br /><br />Again now, just like we had booted up the GParted image off of our USB flash, you should boot up the Ubuntu Installer from the flash drive.<br /><br />Once it loads up and asks you to Try or Install, choose Install Ubuntu. Follow along on the screens to come. When it asks you where you would like to Install the system, choose "Install along side another OS". At this point you can just let it go on and do it's thing however, the bootloader will be replaced and you will get Ubuntu's GRUB if you reboot. I chose instead the "Manual" option and basically clicked on the unallocated space, created an ext3 partition and marked it as / (root) for Ubuntu to install on it. On the bottom where it asks you where the Bootloader should go, I selected sda4 which is the new partition we created earlier.<br /><br />Go ahead and Install Ubuntu and then once it is done, restart and go into your original WeTab Os.<br /><br />You must now modify the extlinux.conf file under /boot/extlinux/extlinux.conf in order to load up Ubuntu.<br /><br />Add the following lines at the bottom of you extlinux.conf<br /><br />label Ubuntu<br />menu label ^Ubuntu<br />KERNEL chain.c32<br />APPEND hd0 4<br /><br />Notice the Append tells extlinux that the logical partition sought for booting is on harddrive 0 and partition 4 (/dev/sda4), where we installed Ubuntu along with it's own GRUB loader.<br /><br />Next we need to modify the GRUB settings for our Ubuntu installation to enable the touchscreen. Restart the machine and go into your Ubuntu install from the bootup menu (use the quicktouch button to select the Ubuntu install).<br /><br />Once you are inside Ubuntu, open up a terminal session (click the ubuntu button on the top left corner, type "terminal" in the search bar and then double click the icon when it shows up.<br /><br />Now to enable the touchscreen we must follow Samiux's and W3C's directions ( and but follow along don't do what is on those pages, I'll cut and paste here).<br /><br />First off you want to change the GRUB loading parameters as Samiux describes, but you need to follow W3C's advice regarding the correct hexadecimal value to add in. So first do an lsusb:<br /><br />$>lsusb <br />Bus 007 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub<br />Bus 006 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub<br />Bus 005 Device 002: ID 04d9:a015 Holtek Semiconductor, Inc. <br />Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub<br />Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub<br /><b>Bus 003 Device 002: ID 0eef:72a1 D-WAV Scientific Co., Ltd</b> <br />Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub<br />Bus 002 Device 004: ID 04f2:b213 Chicony Electronics Co., Ltd <br />Bus 002 Device 003: ID 12d1:1404 Huawei Technologies Co., Ltd. <br />Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub<br />Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub<br /><br />note the ID value and place the same values as follows in your GRUB configuration file, Grub 2.o uses a cfg file so go to your /etc/default/grub and modify it to look like the following.<br /><br /></a><br /><br /><br /><br /></div><img src="" height="1" width="1" alt=""/>barsoum (Etheros) recovery, BIOS and stuffOk so in my previous post I complained loudly about there being no obvious specialness to the BIOS on the WeTab and that all the WeTab-specific shtuff (including the recovery program) existed on the SDD only. <br /><br />I rescind that statement, I was...sadly...but thankfully....WRONG! <br /><br /. <br /><br /).<br /><br /.<br /><br /. <br /><br /:=tales_of_invention;theme=a_taste_of_tedglobal_2010;theme=how_the_mind_works;event=TEDGlobal=tales_of_invention;theme=a_taste_of_tedglobal_2010;theme=how_the_mind_works;event=TEDGlobal+2010;"></embed></object><br /><br />A couple of years from now and that headset will probably be as comfortable as earphones I suppose.<br /><br /. <br /><br />Well enough day-dreaming for one day. <br /><br />Hmmm...I wonder what the EFI BIOS looks like and how to mess around with that...I never learn...<img src="" height="1" width="1" alt=""/>barsoum (Etheros) updateSo.<br /><br />I wasn't really upto diagnosing so I followed the Developer notes on recovering: <a href=""></a>..<br /><br />After that I found a nice LiveCD for GParted (<a href=""></a>) to repartition the SDD so that I could install Meego (<a href=""></a>). <br /.<br /><br /.<br /><br />Anyhow, I'm not too happy hanging in the wind with the Meego loader. I've decided to burn back my original /dev/sda I had made way back when I wrote the post on backing up (<a href=""><...<br /><br /).<br /><br /.<br /><br />Apart from this I'm still trying to sort out whether I like this gizmo, it's far too bulky to sit comfortably with and it heats up like a mother-father. Well, until later folks, adios!<img src="" height="1" width="1" alt=""/>barsoum (Etheros) widget iconsthe following forum post is full of nice icons for your WeTab apps: <a href=""></a><br /><br />If you want to know how to map icons to apps please have a look at my old post: <a href=""></a><img src="" height="1" width="1" alt=""/>barsoum (Etheros) Break from the WeTabWell, I'm on a work trip in Helsinki, so I don't have much time to mess around with my WeTab, should be back to my free self in a couple of days.<img src="" height="1" width="1" alt=""/>barsoum (Etheros) Right-Click Mouse Button as soft-touchOk, so this one kept me up a good full night. After messing around with Vnc and other stuff on the WeTab it was finally evident that I needed to right-mouse-click on some things. Resorting to a real mouse is ok but not exemplary.<br /><br />At first I thought I could add a key to the keyboard.xml for matchbox but it turns out you that clicking on the virtual keyboard modifies the current mouse position (Duh). My last resort was to see if I could get the softtouch button to be my right mouse button. To do this, I first needed a program that could simulate the right mouse button click. Fortunately for us, these two guys had the code for such a task posted on a forum <a href="">here</a>. Many thanks to Osor. (You can skip the compilation details and download the binary directly from <a href="">here</a>.)<br /><br />Get the c code for rightclick.c as found below and put in a rightclick.c on your WeTab in a build directory.<br /><br /><pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"><code>#include <stdlib.h><br />#include <stdio.h><br /><br />#include <X11/Xlib.h><br />//#include <X11/extensions/XTest.h><br /><br />#define RMB 3 /* Right Mouse Button */<br /><br />int main(int argc, char **argv)<br />{<br /> Display *d;<br /><br /> if(!(d = XOpenDisplay(NULL))) {<br /> fprintf(stderr, "Cannot open display \"%s\".\n", XDisplayName(NULL));<br /> exit(EXIT_FAILURE);<br /> }<br /><br /> int ignore;<br /> if(!(XQueryExtension(d, "XTEST", &ignore, &ignore, &ignore))) {<br /> fprintf(stderr, "You do not have the \"XTEST\" extension enabled on your current display.\n");<br /> exit(EXIT_FAILURE);<br /> }<br /><br /> XTestFakeButtonEvent(d, RMB, True, 0);<br />#ifdef UP_AND_DOWN<br /> XTestFakeButtonEvent(d, RMB, False, 0);<br />#endif /* UP_AND_DOWN */<br /><br /> XCloseDisplay(d);<br /> exit(EXIT_SUCCESS);<br />}<br /></code></pre><br />Of course as always with any code you get, it won't compile straight off. You have to comment out the include for xtest as shown above.<br /><br />Also changes will need to be made to the makefile and a symbolic link added in your /usr/lib/.<br /><br />For the makefile, add the -L directive to point the ld linker to the correct search path for the Xtst library.<br /><br /><pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"><code><br />LDFLAGS += -L/usr/lib/ -lX11 -lXtst<br /><br />all: rightclick rightclick-alt<br /><br />rightclick: rightclick.c<br /> ${CC} ${CFLAGS} ${LDFLAGS} -o $@ $<<br /><br />rightclick-alt: rightclick.c<br /> ${CC} ${CFLAGS} ${LDFLAGS} -DUP_AND_DOWN -o $@ $<<br /></code></pre><br />Next create a symbolic link for libXtst like so:<br /><br />$ sudo ln -s /usr/lib/libXtst.so.6 /usr/lib/libXtst.so<br /><br />Finally do:<br />$ make<br /><br />You should now have two files in your directory: rightclick and rightclick-alt. According to the author the rightclick one sends a mouse button down event only and the other one sends a down and up event.<br /><br />Now we could assign our soft-touch button to the rightclick-alt <a href="">binary</a> and now have right clicks on our WeTabs. To see how to reassign the soft-touch button please visit my earlier post <a href="">changing the soft-touch button assignment.</a><br /><br />Also I've uploaded the <a href="">binary I have here.</a> I believe it should work for all WeTabers.<br /><br />Now go off and right-click as you wish:)<img src="" height="1" width="1" alt=""/>barsoum (Etheros) Dosbox cursor/touchscreen offsetAfter running dosbox on the WeTab and trying to get one of my all time favorite games to run: Gabriel Knight 1, I found that the touchscreen clicks were not being mapped correctly. This led me on quite a chase.<br /><br />To start with, I read some of the posts at Vogon concerning touchscreens and Dosbox. It seems that people generally agree that the problem originates inside the SDL libraries.<br /><br />I removed the 4tiitoo version of the sdllib and compiled it.<br /><br />i then set the following environment in a terminal window:<br /><br />export SDL_MOUSE_RELATIVE=0<br /><br />With this setting I get strange offsets between where I touch and where the mouse actually ends up clicking. This gap increases as I go up the screen. Have to add some debug message in the source and test.<img src="" height="1" width="1" alt=""/>barsoum (Etheros) on Multitouch and WeTab<div dir="ltr" style="text-align: left;" trbidi="on"><span class="Apple-style-span" style="color: #333333; line-height: 18px;"><span class="Apple-style-span" style="font-family: Arial,Helvetica,sans-serif;">Update Feb 28 2011<br />-------------------<br />Please see the following two posts for complete instructions on installing ubuntu and enabling the multi touch driver<br /></span></span><br /><span class="Apple-style-span" style="color: #333333; line-height: 18px;"><span class="Apple-style-span" style="font-family: Arial,Helvetica,sans-serif;">-------------------------------------------------------------------------------</span></span><br /><br /><span class="Apple-style-span" style="color: #333333; line-height: 18px;"><span class="Apple-style-span" style="font-family: Arial,Helvetica,sans-serif;">The following forum posts discusses enabling touch on On Ubuntu for the WeTab: </span><a href="" style="color: #d52a33; text-decoration: none;"><span class="Apple-style-span" style="font-family: Arial,Helvetica,sans-serif;"></span></a></span><br /><span class="Apple-style-span" style="font-family: Arial,Helvetica,sans-serif;"><br /></span><br /><span class="Apple-style-span" style="color: #333333; line-height: 18px;"><span class="Apple-style-span" style="font-family: Arial,Helvetica,sans-serif;"> It looks like the following two packages are needed, I haven't had a chance to test this out yet.<-egalax-dkms_1.0.3_all.deb</span></a><span class="Apple-style-span" style="font-family: Arial,Helvetica,sans-serif;"> </span></span><span class="Apple-style-span" style="font-family: Arial,Helvetica,sans-serif;"><br /><-dkms_1.0.5_all.deb</span></a><span class="Apple-style-span" style="font-family: Arial,Helvetica,sans-serif;"> </span></span><span class="Apple-style-span" style="font-family: Arial,Helvetica,sans-serif;"><br /></span> <span class="Apple-style-span" style="font-family: Arial,Helvetica,sans-serif;"><span class="Apple-style-span" style="line-height: 17px;"></span></span> <span class="Apple-style-span" style="line-height: 17px;"><span class="Apple-style-span" style="font-family: Arial,Helvetica,sans-serif;">Some glitch with cursor lodging on rthe last touch position still remains to be solved.</span></span></span><br /><span class="Apple-style-span" style="color: #333333; line-height: 18px;"><span class="Apple-style-span" style="line-height: 17px;"><span class="Apple-style-span" style="font-family: Arial,Helvetica,sans-serif;"><br /></span> </span></span><br /><span class="Apple-style-span" style="color: #333333; line-height: 18px;"><span class="Apple-style-span" style="line-height: 17px;"><span class="Apple-style-span" style="font-family: Arial,Helvetica,sans-serif;">The following post discusses getting around that glitch: </span></span></span><br /><span class="Apple-style-span" style="color: #333333; line-height: 18px;"><a href=""><span class="Apple-style-span" style="font-family: Arial,Helvetica,sans-serif;"><br class="Apple-interchange-newline" /></span></a></span><br /><span class="Apple-style-span" style="font-family: Arial,Helvetica,sans-serif;"><br /></span><br /><span class="Apple-style-span" style="font-family: Arial,Helvetica,sans-serif;">Finally this is a reply from debb1046 explaining how to get Ubuntu running with multitouch on the WeTab:</span><br /><span class="Apple-style-span" style="font-family: Arial,Helvetica,sans-serif;"><br /></span><br /><span class="Apple-style-span" style="font-family: Arial,Helvetica,sans-serif;"><span class="Apple-style-span" style="color: #2a2a2a;"></span></span><br /><pre style="line-height: 17px; white-space: normal;"><span class="Apple-style-span" style="font-family: Arial,Helvetica,sans-serif;">since the fix has been committed the only thing you should<br />have to do is to add the repositories of the utouch team:<;">sudo add-apt-repository ppa:utouch-team/utouch </span></pre><pre style="line-height: 17px; white-space: normal;"><span class="Apple-style-span" style="font-family: Arial,Helvetica,sans-serif;">sudo add-apt-repository ppa:utouch-team/unstable<;">after apt-get update and apt-get upgrade and reboot you should have<br />multitouch working<br /><br />If I understand correctly, the new driver should make it's way into<br />maverick-proposed soon.</span></pre><pre style="line-height: 17px; white-space: normal;"></pre><pre style="line-height: 17px; white-space: normal;"><span class="Apple-style-span" style="font-family: Arial,Helvetica,sans-serif;"> If you currently start with usbquirk on the kernel command line you have<br />to remove that. Likewise, if you had copied the wetab calibration<br />(99-calibration.conf) you have to remove that also.<br /><br />In practice, the multi-touch enabled driver doesn't make a lot of<br />difference because none of the applications support it. The three and<br />four finger gestures that were shown in a video of the unity netbook<br />desktop will not work because the wetab's panel only reads two fingers.</span></pre></div><img src="" height="1" width="1" alt=""/>barsoum (Etheros) VNC from the WeTab<div class="separator" style="clear: both; text-align: center;"></div><div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="129" src="" width="320" /></a></div><br /.<br /><br />Well, first off, download the rpm from <a href="">RealVNC</a>.<br /><br />I got vnc-4_1_3-x86_linux.rpm.<br /><br />You will also need one glibc compatibility library that for some reason don't exist inside the 4tiitoo supplied version of compat-libstdc++.<br /><br />This <a href="">Fedora Core13 rpm of compat-libstdc</a> worked for me.<br /><br />Now install the compat lib package:<br /><br />sudo yum localinstall --nogpgcheck compat-libstdc++-296-2.96-143.i686.rpm <br /><br />And then throw in VNC:<br /><br />sudo rpm -i --nodeps vnc-4_1_3-x86_linux.rpm<br /><br />You should now be able to launch VNC:<br /><br />vncviewer<br /><br />That's all folks! Happy VNCing ;)<img src="" height="1" width="1" alt=""/>barsoum (Etheros) kchmviewer on the WeTabWell, I have to say I am not a fan of FBReader. It has lots of potential, but crashes far too often for my taste. <br /><br />I've tried a bunch of chm readers along the years and have found kchmviewer to be the one that suites me best. If you have better suggestions please do tell!<br /><br />Well lets get down to business. <br /><br />Download the rpm from <a href="">kchmviewer.net</a><br /><br />Next go get chmlib from <a href="">here</a>.<br /><br />Build chmlib and install it.<br /><br />mkdir chmlib<br />mv chmlib-0.40.tar.gz chmlib<br />cd chmlib/<br />tar xvfz chmlib-0.40.tar.gz <br />cd chmlib-0.40<br />./configure<br />make<br />sudo make install<br />cd ..<br /><br />Now install the kchmviewer rpm. You might have some trouble with yum not finding the lib. Just make sure the path to your so file is in your ld.so.conf.<br /><br />nano /etc/ld.so.conf<br /><br />sudo ldconfig<br /><br />sudo rpm -i --nodeps kchmviewer-5.2-1.i586.rpm <br /><br />You should now be good to go. Just enter kchmviewer and it will pop up to say hello.<img src="" height="1" width="1" alt=""/>barsoum (Etheros) | http://feeds.feedburner.com/blogspot/OdLGi | CC-MAIN-2017-47 | refinedweb | 11,056 | 61.97 |
Last Updated on September 3, 2020
Word.
Kick-start your project with my new book Deep Learning for Natural Language Processing, including step-by-step tutorials and the Python source code files for all examples.
Let’s get started.
- Update Feb/2018: Fixed a bug due to a change in the underlying APIs.
- Updated Oct/2019: Updated for Keras 2.3 and TensorFlow 2.0.
Need help with Deep Learning for Text Data?
Take my free 7-day email crash course now (with code).
Click to sign-up and also get a free PDF Ebook version of the course.
Start Your FREE Crash-Course Now:
- Word2Vec.
- GloVe..
Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome..
Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome..
- Word Embedding on Wikipedia
- Keras Embedding Layer API
- Using pre-trained word embeddings in a Keras model, 2016
- Example of using a pre-trained GloVe Embedding in Keras
- GloVe Embedding
- An overview of word embeddings and their connection to distributional semantic models, 2016
- Deep Learning, NLP, and Representations, 2014.
Thank you Jason,
I am excited to read more NLP posts.
Thanks.
Thanks man, It was really helpful.
You’re welcome.
after embedding,have to have a “Flatten()” layer? In my project, I used a dense layer directly after embedding. is it ok?
Try it and see.
I appreciate how well updated you keep these tutorials. the first thing I always look at, when I start reading is the update date. thank you very much.
You’re welcome.
I require all of the code to work and keep working!
Hi, Jason:
when one_hot encoding is used, why is padding necessary? Doesn’t one_hot encoding already create an input of equal length?
The one hot encoding is for one variable at one time step, e.g. features.
Padding is needed to make all sequences have the same number of time steps.
See this:
I split my data into 80-20 test-train and I’m still getting 100% accuracy. Any idea why? It is ~99% on epoch 1 and the rest its 100%.
Consider using the procedure in this post to evaluate your model:
Use drop-out 20%, your model is overfit!!
Thank you Jason. I always find things easier when reading your post.
I have a question about the vector of each word after training. For example, the word “done” in sentence “Well done!” will be represented in different vector from that word in sentence “Could have done better!”. Is that right? I mean the presentation of each word will depend on the context of each sentence?
No, each word in the dictionary is represented differently, but the same word in different contexts will have the same representation.
It is the word in its different contexts that is used to define the representation of the word.
Does that help?
Yes, thank you. But I still have a question. We will train each context separately, then after training the first context, in this case is “Well done!”, we will have a vector representation of the word “done”. After training the second context, “Could have done better”, we have another vector representation of the word “done”. So, which vector will we choose to be the representation of the word “done”?
I might misunderstand the procedure of training. Thank you for clarifying it for me.
No. All examples where a word is used are used as part of the training of the representation of the word. There is only one representation for each word during and after training.
I got it. Thank you, Jason.
Hi Jason,
any ideas on how to “filter the embedding for the unique words in your training data” as mentioned in the tutorial?
The mapping of word to vector dictionary is built into Gensim, you can access it directly to retrieve the representations for the words you want: model.wv.vocab
HI Jason,
I am really appreciated the time U spend to write this tutorial and also replying.
My question is about “model.wv.vocab” you wrote. is it an address site?
It does not work actually.
No, it is an attribute on the model.
Hi, Jason
Good day.
I just need your suggestion and example. I have two different dataset, where one is structured and the other is unstructured. The goal is to use the structured to construct a representation for the unstructured, so apply use word embedding on the two input data but how can I find the average of the two embedding and flatten it to one before feeding the layer into CNN and LSTM.
Looking forward to your response.
Regards
Abbey
Sorry, what was your question?
If your question was if this is a good approach, my advice is to try it and see.
Hi, Jason
How can I find the average of the word embedding from the two input?
Regards
Abbey
Perhaps you could retrieve the vectors for each word and take their average?
Perhaps you can use the Gensim API to achieve this result?
Hi Jason,
I have a set of documents(1200 text of movie Scripts) and i want to use pretrained embeddings. But i want to update the vocabulary and train again adding the words of my corpus. Is that possible ?
Sure.
Load the pre-trained vectors. Add new random vectors for the new words in the vocab and train the whole lot together.
Hi Jason…Could you also help us with R codes for using Pre-Trained GloVe Embedding
Sorry, I don’t have R code for word embeddings.
Hi Jason, really appreciate that you answered all the replies! I am planning to try both CNN and RNN (maybe LSTM & GRU) on text classification. Most of my documents are less than 100 words long, but about 5 % are longer than 500 words. How do you suggest to set the max length when using RNN?If I set it to be 1000, will it degrade the learning result? Should I just use 100? Will it be different in the case of CNN?
Thank you!
I would recommend experimenting with different configurations and see how the impact model skill.
Dear Hao,
Did you try RNN(LSTM or GRU) on text classification?If yes then can you plz provide me the code??
Here is an example:
I’d like to thank you for this post. I’ve been struggling to understand this precise way of using keras for a week now and this is the only post I’ve found that actually explains what each step in the process is doing – and provides code that self-documents what the data looks like as the model is constructed and trained. This makes it so much easier to adapt to my particular requirements.
Thanks, I’m glad it helped.
In the above Keras example, how can we predict a list of context words given a word? Lets say i have a word named ‘sudoku’ and want to predict the sourrounding words. how can we use word2vec from keras to do that?
It sounds like you are describing a language model. We can use LSTMs to learn these relationships.
Here is an example:
No, what i meant was for word2vec skip-gram model predicts a context word given the center word. So if i train a word2vec skip-gram model, how can i predict the list of context words if my center word is ‘sudoku’?
Regards,
Azim
I don’t know Azim.
You can get the cosine distance between the words, and the one that is having the least distance would surround it.. here is the link:
Go to Natural Language Processing and you can find a cosine function there, use them to find yours..
Hi Jason,
Thanks for your useful blog I have learned a lots.
I am wondering if I already have pretrained word embedding, is that possible to set keras embedding layer trainable to be true? If it is workable, will I get a better result, when I only use small size of data to pretrain the word embedding model. Many thanks!
You can. It is hard to know whether it will give better results. Try it.
Hey Jason,
Is it possible to perform probability calculations on the label? I am looking at a case where it is not simply +/- but that a given data entry could be both but more likely one and not the other.
Yes, a neural network with a sigmoid or softmax output can predict a probability-like score for each class.
I’m doing something like this except with my own feature vectors — but to the point of the labels — I do ternary classification using categorical_crossentropy and a softmax output. I get back an answer of the probability of each label.
Nice!
Hey Jason!
Thanks for a wonderful and detailed explanation of the post. It helped me a lot.
However, I’m struggling to understand how the model predicts a sentence as positive or negative.
i understand that each word in the document is converted into a word embedding, so how does our model evaluate the entire sentence as positive or negative? Does it take the sum of all the word vectors? Perhaps average of them? I’ve not been able to figure this part out.
Great question!
The model interprets all of the words in the sequence and learns to associate specific patterns (of encoded words) with positive or negative sentiment
Hi Jason,
Thanks a lot for your amazing posts. I have the same question as Ravil. Can you elaborate a bit more on “learns to associate specific patterns?”
Good question Ken, perhaps this post will make it clearer how ml algorithms work (a functional mapping):
Does that help?
Thanks for your reply. But I was trying to ask is that how does keras manage to produce a document level representation by having the vectors of each word? I don’t seem to find how was this being done in the code.
Cheers.
The model such as the LSTM or CNN will put this together.
In the case of LSTMs, you can learn more here:
Does that help?
Hi Jason,
First, thanks for all your really useful posts.
If I understand well your post and answers to Ken and Ravil, the neural network you build in fact reduces the sequence of embedding vectors corresponding to all the words of a document to a one-dimensional vector with the Flatten layer, and you just train this flattening, as well as the embedding, to get the best classification on your training set, isn’t it?
Thank you in advance for your answer.
Sort of.
words => integers => embedding
The embedding has a vector per word which the network will use as a representation for the word.
We have a sequence of vectors for the words, so we flatten this sequence to one long vector for the Dense model to read. Alternately, we could wrap the dense in a timedistributed layer.
Aaah! So nothing tricky is done when flattening, more or less just concatenating the fixed number of embedding vectors that is the output of the embedding layer, and this is why the number of words per document has to be fixed as a setting of this layer. If this is correct, I think I’m finally understanding how all this works.
I’m sorry to bother you more, but how does the network works if a document much shorter than the longest document (the number of its words being set as the number of words per document to the embedding layer) is given to the network as training or testing? It just fills the embedding vectors of this non-appearing words as 0? I’ve been looking for ways to convert all the word embeddings of a text to some sort of document embedding, and this just seems a solution too simple to work, or that may work but for short documents (as well as other options like averaging the word vectors or taking the element-wise maximum of minimum).
I’m trying to do sentiment analysis for spanish news, and I have news with like 1000 or more words, and wanted to use pre-trained word embeddings of 300 dimensions each. Wouldn’t it be a size way too huge per document for the network to train properly, or fast enough? I imagine you do not have a precise answer, but I’d like to know if you have tried the above method with long documents, or know that someone has.
Thank you again, I’m sorry for such a long question.
Yes.
We can use padding for small documents and a Masking input layer to ignore padded values. More here:
Try different sized embeddings and use results to guide the configuration.
Okay, thank you very much! I will give it a try.
the chinese word how to vector sequence
Sorry?
me bot trying interact comments born with lstm
Hi Jason,
I have successfully trained a model using the word embedding and Keras. The accuracy was at 100%.
I saved the trained model and the word tokens for predictions.
MODEL.save(‘model.h5’, True)
TOKENIZER = Tokenizer(num_words=MAX_NB_WORDS)
TOKENIZER.fit_on_texts(TEXT_SAMPLES)
pickle.dump(TOKENIZER, open(‘tokens’, ‘wb’))
When predicting:
– Load the saved model.
– Setup the tokenizer, by loading the saved word tokens.
– Then predict the category of the new data.
I am not sure the prediction logic is correct, since I am not seeing the expected category from the prediction.
The source code is in Github:
Appreciate if you can have a look and let me know what I am missing.
Best regards,
Hilmi.
Your process sounds correct. I cannot review your code sorry.
What was the problem exactly?
Thank you, Jason! Your examples are very helpful. I hope to get your attention with my question. At training, you prepare the tokenizer by doing:
t = text.Tokenizer();
t.fit_on_texts(docs)
Which creates a dictionary of words:numbers. What do we do if we have a new doc with lost of new words at prediction time? Will all these words go the unknown token? If so, is there a solution for this, like can we fit the tokenizer on all the words in the English vocab?
You must know the words you want to support at training time. Even if you have to guess.
To support new words, you will need a new model.
Hello Jason!
In Example of Using Pre-Trained GloVe Embedding, do you use the word embedding vectors as weights of the embedding layer?
Yes.
Very nice set of Blogs of NLP and Keras – thanks for writing them.
As a quick note for others
When I tried to load the glove file with the line:
f = open(‘../glove_data/glove.6B/glove.6B.100d.txt’)
I got the error
UnicodeDecodeError: ‘charmap’ codec can’t decode byte 0x9d in position 2776: character maps to
To fix I added:
f = open(‘../glove_data/glove.6B/glove.6B.100d.txt’,encoding=”utf8″)
This issue may have been caused by using Windows.
Thanks for the tip Alex.
Hi Jason,
Wonderful tutorials!
I have a question. Why do we have to one-hot vectorize the labels? Also, if I have a pad sequence of ex. [2,4,0] what the one hot will be? I’m trying to understand better one hot vectorzer.
I appreciate your response!
We don’t one hot encode the labels, we one hot encode the words.
Perhaps this post will help you better understand one hot encoding:
Hi Jason,
Thank you for your excellent tutorial. Do you know if there is a way to build a network for a classification using both text embedded data and categorical data ?
Thank you
Sure, you could have a network with two inputs:
Thank you Jason
How to do sentence classification using CNN in keras ? please help
See this tutorial:
Fantastic explanation, thanks so much. I’m just amazed at how much easier this has become since the last time I looked at it.
I’m glad the post helped Stuart!
Hi Jason …the 14 word vocab from your docs is “well done good work great effort nice excellent weak poor not could have better” For a vocab_size of 14, this one_shot encodes to [13 8 7 6 10 13 3 6 10 4 9 2 10 12]. Why does 10 appear 3 times, for “great”, “weak” and “have”?
Sorry, I don’t follow Stuart. Could you please restate the question?
Hi Jason, the encodings that I provided in the example above came from kerasR with a vocab_size of 14. So let me ask the same question about the uniqueness of encodings using your Part 3 one_hot example above with a vocab_size of 50.
Here different encodings are produced for different new kernels (using Spyder3/Python 3.4):
[[31, 33], [27, 33], [48, 41], [34, 33], [32], [5], [14, 41], [43, 27], [14, 33], [22, 26, 33, 26]]
[[6, 21], [48, 44], [7, 26], [46, 44], [45], [45], [10, 26], [45, 48], [10, 44], [47, 3, 21, 27]]
[[7, 8], [16, 42], [24, 13], [45, 42], [23], [17], [34, 13], [13, 16], [34, 42], [17, 31, 8, 19]]
Pleas note that in the first line, “33” encodes for the words “done”, “work”, “work”, “work” & “done”. In the second line “45” encodes for the words “excellent” & “weak” & “not”. In the third line, “13” encodes “effort”, “effort” & “not”.
So I’m wondering why the encodings are not unique? Secondly, if vocab_size must be much larger then the actual size of the vocabulary?
Thanks
The one_hot() function does not map words to unique integers, it uses a hash function that can have collisions, learn more here
Thanks Jason, In your Part 4 example, the Tokenizer approach always gives the same encodings and these appear to be unique.
Yes, I recommend using Tokenizer, see this post for more:
Great article Jason.
How do you convert back from an embedding to a one-hot? For example if you have a seq2seq model, and you feed the inputs as word embeddings, in your decoder you need to convert back from the embedding to a one-hot representing the dictionary. If you do it by using matrix multiplication that can be quite a large matrix (e.g embedding size 300, and vocab of 400k).
The output layer can predict integers directly that you can map to words in your vocabulary. There would be no embedding layer on the output.
Hi,
Very helpful article.
I have created word2vec matrix of a sentence using gensim and pre-trained Google News vector. Can I just flatten this matrix to a vector and use that as a input to my neural network.
For example:
each sentence is of length 140 and I am using a pre-trained model of 100 dimensions, therefore:- I have a 140*100 matrix representing the sentence, can i just flatten it to a 14000 length vector and feed it to my input layer?
It depends on what you’re trying to model.
Great article, could you shed some light on how do Param # of 400 and 1500 in two neural networks come from? Thanks
Oh! Is it just vocab_size * # of dimension of embedding space?
1. 50 * 8 = 400
2. 15* 100 = 1500
What do you mean exactly?
Great post! I’m working with my own corpus. How would I save the weight vector of the embedding layer in a text file like the glove data set?
My thinking is it would be easier for me to apply the vector representations to new data sets and/or machine learning platforms (mxnet etc) and make the output human readable (since the word is associated with the vector).
You could use get_weights() in the Keras API to retrieve the vectors and save directly as a CSV file.
get_weights() for what exactly? does it need a loop?
get_weights is a function that will return the weights for a model or layers, depending on what you call it on exactly:
Clear Short Good reading, always thank you for your work!
Thanks.
Hello Jason,
I have a dataset with which I’ve attained 0.87 fscore by 5 fold cross validation using SVM.Maximum context window is 20 and one hot encoded.
Now, I’ve done whatever has been mentioned and getting an accuracy of 13-15 percent for RNN models where each one has one LSTM cell with 3,20,150,300 hidden units. Dimensions of my pre-trained embeddings is 300.
Loss is decreasing and even reaching negative values, but no change in accuracy.
I’ve tried the same with your CNN,basic ANN models you’ve mentioned for text classification .
Would you please suggest some solution. Thanks in advance.
I have some ideas here:
When I copy the code of the first box I get the error:
AttributeError: ‘int’ object has no attribute ‘ndim’
in the line :
model.fit(padded_docs, labels, epochs=50, verbose=0)
Where is the problem?
Copy the the code from the “complete example”.
Hi Jason,
I’ve got the same error, also will running the “compete example”.
What can be the cause?
Try casting the labels to numpy arrays.
i get the same!
I have fixed and updated the examples.
Carsten, you need labels to be numpy.array not just list.
Hi Jason,
If I have unkown words in training set, how can I assign the same random initialize vector to all of the unkown words when using pretrained vector model like glove or w2v. thanks!!!
Why would you want to do that?
If my data is in specific domain and I still want to leverage general word embedding model(e.g. glove.6b.100d trained from wiki), then it must has some OOV in domain data, so. no no mather in training time or inference time it propably may appear some unkown words.
It may.
You could ignore these words.
You could create a new embedding, set vectors from existing embedding and learn the new words.
Amazing Dr. Jason!
Thanks for a great walkthrough.
Kindly advice on the following.
On the step of encoding each word to integer you said: “We could experiment with other more sophisticated bag of word model encoding like counts or TF-IDF”. Could you kindly elaborate on how can it be implemented, as tfidf encodes tokens with floats. And how to tie it with Keras, passing it to an Embedding layer please? I’m keen to experiment with it, hope it could yield better results.
Another question is about input docs. Suppose I’ve preprocessed text by the means of nltk up to lemmatization, thus, each sample is a list of tokens. What is the best approach to pass it to Keras embedding layer in this case then?
I have most posts on BoW, learn more here:
You can encode your tokens as integers manually or use the Keras Tokenizer.
Well, Keras Tokenizer can accept only texts or sequences. Seems the only way is to glue tokens together using ‘ ‘.join(token_list) and then pass onto the Tokenizer.
As for the BOW articles, I’ve walked through it theys are so very valuable. Thank you!
Using BOW differs so much from using Embeddings. As BOW would introduce huge sparse array of features for each sample, while Embeddings aim to represent those features (tokens) very densely up to a few hundreds items.
So, BOW in the other article gives incredibly good results with just very simple NN architecture (1 layer of 50 or 100 neurons). While I struggled to get good results using Embeddings along with convolutional layers…
From your experience, would you please advice on that please? Are Embeddings actually viable and it is just a matter of finding a correct architecture?
Nice! And well done for working through the tutorials. I love to see that and few people actually “do the work”.
Embeddings make more sense on larger/hard problems generally – e.g. big vocab, complex language model on the front end, etc.
I see, thank you.
Thanks jason for another great tutorial.
I have some questions :
Isn’t one hot definition is binary one, vector of 0’s and 1?
so [[1,2]] would be encoded to [[0,1,0],[0,0,1]]
how embedding algorithm is done on keras word2vec/globe or simply dense layer(or something else)
thanks
joseph
Sorry, I don’t follow your question. Perhaps you could rephrase it?
Amazing Dr. Jason!
Thanks for a great walkthrough.
The dimension for each word vector like above example e.g. 8, is set randomly?
Thank you
The dimensionality is fixed and specified.
In the first example it is 8, in the second it is 100.
Thank you Dr. Jason for your quick feedback!
Ok, I see that the pre-trained word embedding is set to 100 dimensionality because the original file “glove.6B.100d.txt” contained a fixed number of 100 weights for each line of ASCII.
However, the first example as you mentioned in here, “The Embedding has a vocabulary of 50 and an input length of 4. We will choose a small embedding space of 8 dimensions.”
You choose 8 dimensions for the first example. Does it means it can be set to any numbers other than 8? I’ve tried to change the dimension to 12. It doesn’t appeared any errors but the accuracy drops from 100% to 89%
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
embedding_1 (Embedding) (None, 4, 12) 600
_________________________________________________________________
flatten_1 (Flatten) (None, 48) 0
_________________________________________________________________
dense_1 (Dense) (None, 1) 49
=================================================================
Total params: 649
Trainable params: 649
Non-trainable params: 0
Accuracy: 89.999998
So, how dimensionality is set? Does the dimensions effects the accuracy performance?
Sorry I am trying to grasp the basic concept in understanding NLP stuff. Much appreciated for your help Dr. Jason.
Thank you
No problem, please ask more questions.
Yes, you can choose any dimensionality you like. Larger means more expressive, required for larger vocabs.
Does that help Anna?
Yes indeed Dr. Now I can see that the dimensionality is set depends on the number of vocabs.
Thank you again Dr Jason for enlighten me! 🙂
You asked good questions.
Hi Jason,
thanks for amazing tutorial.
I have a question. I am trying to do semantic role labeling with context window in Keras. How can I implement context window with embedding layer?
Thank you
I don’t know, sorry.
Hi, great website! I’ve been learning a lot from all the tutorials. Thank you for providing all these easy to understand information.
How would I go about using other data for the CNN model? At the moment, I am using just textual data for my model using the word embeddings. From what I understand, the first layer of the model has to be the Embeddings, so how would I use other input data such as integers along with the Embeddings?
Thank you!
Great question!
You can use a multiple-input model, see examples here:
Thank you for the fast reply!
Hi Jason, this tutorial is simple and easy to understand. Thanks.
However, I have a question. While using the pre-trained embedding weights such as Glove or word2vec, what if there exists few words in my dataset, which weren’t present in the dataset on which word2vec or Glove was trained. How does the model represent such words?
My understanding is that in your second section (Using Pre-Trained Glove Embeddin), you are mapping the words from the loaded weights to the words present in your dataset, hence the question above.
Correct me, if it’s not the way I think it is.
You can ignore them, or assign them to zero vector, or train a new model that includes them.
Hi Jason. Thanks for this and other VERY clear and informative articles.
Wanted to add one question to Aditya post:
“mapping the words from the loaded weights to the words present in your dataset”
how does it mapping? does it use matrix index == word number from (padded_docs or)?
I am asking because – what if I pass embedding_matrix with origin order, but will shuffle padded_docs before model.fit?
Words must be assigned unique integers that remain consistent across all data and embeddings.
Hi Jason,
I am trying to train a Keras LSTM model on some text sentiment data. I am also using GridSearchCV in sklearn to find the best parameters. I am not quite sure what went wrong but the classification report from sklearn says:
UndefinedMetricWarning: Precision and F-score are ill-defined and being set to 0.0 in labels with no predicted samples.
Below is what the classification report looks like:
precision recall f1-score support
negative 0.00 0.00 0.00 98
positive 0.70 1.00 0.83 232
avg / total 0.49 0.70 0.58 330
Do you know what the problem is?
Perhaps you are trying to use keras metrics with sklearn? Watch the keywords you use when specifying the keras model vs the sklearn evaluation (CV).
Hi Jason,
your blog is really really interesting. I have a question: Which is the diffrence between using word2vec and texts_to_sequences from Tokenizer in keras? I mean in the way the texts are represented.
Is any of the two options better than the other?
Thanks a lot.
Kind regards.
word2vec encodes words (integers) to vectors. texts_to_seqences encodes words to integers. It is a step before word2vec or a step before bag of words.
Hi Jason,
I have a dataframe which contains texts and corresponding labels. I have used gensim module and used word2vec to make a model from the text. Now I want to use that model for input into Conv1D layers. Can you please tell me how to load the word2vec model in Keras Embedding layer? Do I need to pre-process the model in some way before loading? Thanks in advance.
Yes, load the weights into an Embedding layer, then define the rest of your network.
The tutorial above will help a lot.
This is really helpful. You make us awesome at what we do. Thanks!!
I’m glad to hear that.
Thank you for this extremely helpful blog post. I have a question regarding to interpreting the model. Is there a way to know / visualize the word importance after the model is trained? I am looking for a way to do so. For instance, is there a way to find like the top 10 words that would trigger the model to classify a text as negative and vice versa? Thanks a lot for your help in advance
There may be methods, but I am not across them. Generally, neural nets are opaque, and even weight activations in the first/last layers might be misleading if used as importance scores.
Maybe look at Lime
Thanks.
Hi Jason ,
Can you please tell me the logic behind this:
vocab_size = len(t.word_index) + 1
Why we added 1 here??
So that the word indexes are 1-offset, and 0 is reserved for padding / no data.
Hi Jason,
If I want to use this model to predict next word, can I just change the output layer to Dense(100, activation = ‘linear’) and change the loss function to MSE?
Many thanks,
Ray
Perhaps look at some of the posts on training a language model for word generation:
Thanks for this tutoriel ! Really clear and usefull !
You’re welcome.
Hi Jason,
U R the best in keras tutorials and also replying the questions. I am really grateful.
Although I have understood the content of the context and codes U have written above, I am not able to understand what you mean about this sentence:[It might be better to filter the embedding for the unique words in your training data.].
what does “to filter the embedding” mean??
Thank you for replying.
It means, only have the words in the embedding that you know exist in your dataset.
Hi Jason,
Thank U 4 replying but as I am not a native English speaker, I am not sure whether I got it or not. You mean to remove all the words which exist in the glove but do not exist in my own dataset?? in order to raise the speed of implementation?
I am sorry to ask it again as I did not understand clearly.
Thank U in advance Jason
Exactly.
Hi,
This post is great. I am new to machine learning so i have a question which might be basic so i am not sure.As from what i understand, the model takes the embedding matrix and text along with labels at once.What i am trying to do is concatenate POS tag embedding with each pre-trained word embedding but POS tag can be different for the same word depending upon the context.It essentially means that i cant alter the embedding matrix at add to the network embedding layer.I want to take each sentence,find its embedding and concatenate with POS tag embedding and then feed into neural network.Is there a way to do the training sentence by sentence or something? Thanks
You might be able to use the embedding and pre-calculate the vector representations for each sentence.
Sorry but i didn’t quite understand.Can you please elaborate a little?
Sorry, I mean that you can prepare an word2vec model standalone. Then pass each sentence through it to build up a list of vectors. Concat the vectors together and you have a distributed sentence representation.
Thanks alot! One more thing, is it possible to pass other information to the embedding layer than just weights?For example i was thinking that what if i dont change the embedding matrix at all and create a separate matrix of POS tags for whole training data which is also passed to the embedding layer which concatenates these both sequentially?
You could develop a model that has multiple inputs, for example see this post for examples:
Thanks.I saw this post.Your model has separate inputs but they get merged after flattening.In my case i want to pass the embeddings to first convolutional layer,only after they are concatenated. Uptil now what i did was that i have created another integerized sequence of my data according to POS_tags(embedding_pos) to pass as another input and another embedding matrix that contains the embeddings of all the POS tags.
e=(Embedding(vocab_size, 50, input_length=23, weights=[embedding_matrix], trainable=False))
e1=(Embedding(38, 38, input_length=23, weights=[embedding_matrix_pos], trainable=False))
merged_input = concatenate([e,e1], axis=0)
model_embed = Sequential()
model_embed.add(merged_input)
model_embed.fit(data,embedding_pos, final_labels, epochs=50, verbose=0)
I know this is wrong but i am not sure how to concat those both sequences and if you can direct me in right direction,it would be great.The error is
‘Layer concatenate_6 was called with an input that isn’t a symbolic tensor. Received type: . Full input: [, ]. All inputs to the layer should be tensors.’
Perhaps you could experiment and compare the performance of models with different merge layers for combining the inputs.
Hi Jason, awesome post as usual!
Your last sentence is tricky though. You write:
“In practice, I would encourage you to experiment with learning a word embedding using a pre-trained embedding that is fixed and trying to perform learning on top of a pre-trained embedding.”
Without the original corpus, I would argue, that’s impossible.
In Google’s case, the original corpus of around 100 billion words is not publicly available. Solution? I believe you’re suggesting “Transfer Learning for NLP.” In this case, the only solution I see is to add manually words.
E.g. you need ‘dolares’ which is not in Google’s Word2Vec. You want to have similar vectors as ‘money’. In this case, you add ‘dolares’ + 300 vectors from money. Very painful, I know. But it’s the only way I see to do “Transfer Learning with NLP”.
If you have a better solution, I’d love your input.
Cheerio, a big fan
Not impossible, you can use an embedding trained on a other corpus and ignore the difference or fine tune the embedding while fitting your model.
You can also add missing words to the embedding and learn those.
Remember, we want a vector for each word that best captures their usage, some inconsistencies does not result in a useless model, it is not binary useful/useless case.
Thank you very much for the detailed answer!
The link for the Tokenizer API is this same webpage. Can you update it please?
Fixed, thanks.
Hi Jason, great post!
I have successfully trained my model using the word embedding and Keras. I saved the trained model and the word tokens.Now in order to make some predictions, do i have to use the same tokenizer with one that i used in the training?
Correct.
Thank you very much!
Hi Jason, when i was looking for how to use pre-trained word embedding,
I found your article along with this one:
They have many similarities.
Glad to hear it.
Hey jason,
I am trying to do this but sometime keras gives the same integer to different words. Would it be better to use scikit encoder that converts words to integers?
This might happen if you are using a hash encoder, as Keras does, but calls it a one hot encoder.
Perhaps try a different encoding scheme of words to integers
Hi Jason,
I have implemented the above tutorial and the code works fine with GloVe. I am so grateful abot the tutorial Jason.
but when I download GoogleNews-vectors-negative300.bin which is a pre-trained embedding word2vec it gave me this error:
File “/home/mary/anaconda3/envs/virenv/lib/python3.5/site-packages/gensim/models/keyedvectors.py”, line 171, in __getitem__
return vstack([self.get_vector(entity) for entity in entities])
TypeError: ‘int’ object is not iterable.
I wrote the code as the same as your code which you wrote for loading glove but with a little change.
‘model = gensim.models.KeyedVectors.load_word2vec_format(‘./GoogleNews-vectors-negative300.bin’, binary=True)
for line in model:
values = line.split()
word = values[0]
coefs = asarray(values[1:], dtype=’float32′)
embeddings_index[word] = coefs
model.close()
print(‘Loaded %s word vectors.’ % len(embeddings_index))
embedding_matrix = zeros((vocab_dic_size, 300))
for word in vocab_dic.keys():
embedding_vector = embeddings_index.get(word)
if embedding_vector is not None:
embedding_matrix[vocab_dic[word]] = embedding_vector
I saw you wrote a tutorial about creating word2vec by yourself in this link “”,
but I have not seen a tutorial about aplying pre-trained word2vec like GloVe.
please guide me to solve the error and how to apply the GoogleNews-vectors-negative300.bin pretrained wor2vec?
I am so sorry to write a lot as I wanted to explain in detail to be clear.
any guidance will be appreciated.
Best
Meysam
Perhaps try the text version as in the above tutorial?
Hi Jason
thank you very much for replying, but as I am weak at the English language, the meaning of this sentence is not clear. what do you mean by “try the text version”??
in fact, GloVe contains txt files and I implement it correctly but when I wanna run a program by GoogleNews-vectors-negative300.bin which is a pre-trained embedding word2vec it gave me the error and also this file is a binary one and there is no pre-trained embedding word2vec file by txt prefix.
can you help me though I know you are busy?
Best
Meysam
You are using the binary version of the glove file (.bin). Try downloading and using the text version instead.
You can get .txt versions here:
How can we use pre-trained word embedding on mobile?
I don’t see why not, other than disk/ram size issues.
Great post! What changes are necessary if the labels are more than binary such as with 4 classes:
labels = array([2,1,1,1,2,0,-1,0,-1,0])
?
E.g. instead of ‘binary_crossentropy’ perhaps ‘categorical_crossentropy’?
And how shold the Dense layer change?
If I use: model.add(Dense(4, activation=’sigmoid’)), I get an error:
ValueError: Error when checking target: expected dense_1 to have shape (None, 4) but got array with shape (10, 1)
thanks for your work!
I believe this will help:
thanks! also using keras’s to_categorical to discretize the labels was necessary.
one more question: is there a simply way to create the Tokenizer() instance, fit it, save it, and then extend it on new documents? Specifically, so that t.fit_on_texts( ) can be updated on new data.
I’m not so sure that you can.
It might be easier to manage the encoding/mapping yourself so that you can extend it at will.
Nice.
Hi Jason,
For starters, thanks for this post. Ideal to get things going quickly. I have a couple of questions if you don’t mind:
1) I don’t think that one-hot encoding the string vectors is ideal. Even with the recommended vocab size (50), I still got collisions which defeats the purpose even in a toy example such as this. Even the documentation states that uniqueness is not guaranteed. Keras’ Tokenizer(), which you used in the pre-trained example is a more reliable choice in that no two words will share the same integer mapping. How come you proposed one-hot encoding when Tokenizer() does the same job better?
2) Getting Tokenizer()’s word_index property, returns the full dictionary. I expected the vocab_size to be equal to len(t.word_index) but you increment that value by one. This is in fact necessary because otherwise fitting the model fails. But I cannot get the intuition of that. Why is the input dimension size equal to vocab_size + 1?
3) I created a model that expects a BoW vector representation of each “document”. Naturally, the vectors were larger and sparser [ (10,14) ] which means more parameters to learn, no. However, in your document you refer to this encoding or tf-idf as “more sophisticated”. Why do you believe so? With that encoding don’t you lose the word order which is important to learn word embeddings? For the record, this encoding worked well too but it’s probably due to the nature of this little experiment.
Thank you in advance.
The keras one hot encoding method really just takes a hash. It is better to use a true one hot encoding when needed.
I do prefer the Tokenizer class in practice.
The words are 1-offset, leaving room for 0 for “unknown” word.
tf-idf gives some idea of the frequency over the simple presence/absence of a word in BoW.
How that helps.
where can i find the file “../glove_data/glove.6B/glove.6B.100d.txt”??because i come up with the following error.
File “”, line 36
f = open(‘../glove_data/glove.6B/glove.6B.100d.txt’)
^
SyntaxError: invalid character in identifier
You must download it and place it in your current working directory.
Perhaps re-read section 4.
I have placed the code and dataset in same directory.what’s wrong with the code??
f = open(‘glove.6B/glove.6B.100d.txt’)
I am facing the following error.
File “”, line 36
f = open(‘glove.6B/glove.6B.100d.txt’)
^
SyntaxError: invalid character in identifier
Perhaps this will help you when copying code from the tutorial:
Excellent work! This is quite helpful to novice.
And I wonder is this useful to other language apart from English? Since I am a Chinese, and I wonder whether I can apply this to Chinese language and vocabulary.
Thanks again for your time devoted!
I don’t see why not.
Hi,
can you explain how can the word embeddings be given as hidden state input to LSTM?
thanks in advance
Word embeddings don’t have hidden state. They don’t have any state.
for example, I have a word and its 50 (dimensional) embeddings. How can I give these embeddings as hidden state input to an LSTM layer?
Why would you want to give them as the hidden state instead of using them as input to the LSTM?
hi, what a practical post!
I have a question, I work in a sentiment analysis project with word2vec as an embedding model with keras. my problem is when I want to predict a new sentence as an input I face this error:
ValueError: Error when checking input: expected conv1d_1_input to have shape (15, 512) but got array with shape (3, 512)
consider that I want to enter a simple sentence like:”I’m really sad” with the length 3- and my input shape has the length of 15- I don’t know how can I reshape it or doing what to get rid of this error.
and this is the related part of my code:
model = Sequential()
model.add(Conv1D(32, kernel_size=3, activation=’elu’, padding=’same’, input_shape=(15, 512)))
model.add(Conv1D(32, kernel_size=3, activation=’elu’, padding=’same’))
model.add(Conv1D(32, kernel_size=3, activation=’elu’, padding=’same’))
model.add(Conv1D(32, kernel_size=3, activation=’elu’, padding=’same’))
model.add(Dropout(0.25))
model.add(Conv1D(32, kernel_size=2, activation=’elu’, padding=’same’))
model.add(Conv1D(32, kernel_size=2, activation=’elu’, padding=’same’))
model.add(Conv1D(32, kernel_size=2, activation=’elu’, padding=’same’))
model.add(Conv1D(32, kernel_size=2, activation=’elu’, padding=’same’))
model.add(Dropout(0.25))
model.add(Dense(256, activation=’relu’))
model.add(Dense(256, activation=’relu’))
model.add(Dropout(0.5))
model.add(Flatten())
model.add(Dense(2, activation=’softmax’))
would you please help me to solve this problem?
You must prepare the new sentence in exactly the same way as the training data, including length and integer encoding.
At least would you mind sharing some suitable source for me to solve this problem please?
I hope you answer my question as what you done all the time. Thanks
I’m eager to help and answer specific questions, but I don’t have the capacity to write code for you sorry.
Hi Jason,
I have two questions. First, for new instances if the length is greater than this model input shall we truncate the sentence?
Also, since the embedding input is just for the seen training set words, what happens to the predictions-to-word process? I assume it just returns some words similar to the training words not considering all the dictionary words. Of course I am talking about a language model with Glove.
Yes, truncate.
Unseen words are mapped to nothing or 0.
The training dataset MUST be representative of the problem you are solving.
From what i understood from this comment,it is about prediction on test data. Lets assume that there are 50 words in vocabulary which means sentences will have unique integers uptil 50. Now since test data must be tokenized with same instance of tokenizer and if it has some new words, it would have integers with 51 ,52 oand so on..In this case,would the model automatically use 0 for word embeddings or can it raise out of bound type exception?Thanks
You would encode unseen words as 0.
All 50 vocabulary words should start from index 1 to 50, while leave 0 for unseen word in vocabulary. am I right?
Correct.
can this be described as transfer learning?
Perhaps.
Hi,
i have trained and tested my own network. During my work,when i integerized the sentences and created a corresponding word embedding matrix ,it included embeddings for train,validation and test data as well.
Now if i want to reload my model to test for some other similar data, i am confused that how the words from this new data would relate to embedding matrix?
You should have embeddings for test data as well right? or when you create embedding matrix you exclude test data?Thanks
The embedding is created from the training dataset.
It should be sufficiently rich/representative enough to cover all data you expect to in the future.
New data must have the same integer encoding as the training data prior to being mapped onto the embedding when making a prediction.
Does that help?
yes,i understand that i should be using the same tokenizer object for encoding both train and test data, but i am not sure how the embedding layer would behave for the word or index which isnt part of embedding matrix. Obviously test data would have similar words but there must be some words that are bound to be new. Would you say this approach is right to include test data too while creating embedding matrix for model? If you want to predict using some pre trained model,how can i deal with this issue? A small example can be really helpful. Thanks alot for all the help and time!
It won’t. The encoding will set unknown words to 0.
It really depends on the goal of what you want to evaluate.
i want to train my model to predict the target word given to a 5 word sequence . how can i represent my target word ?
Probably using a one hot encoding.
Hello Jason,
This is regarding the output shape of the first embedding layer : (None,4,8).
Am I correct in understanding that the 4 represents the input size which is 4 words and the 8 is the number of features it has generated using those words?
I believe so.
Hi Jason,
Thanks for sharing your knowledge.
My task is to classify set of documents into different categories.( I have a training set of 100 documents and say 10 categories).
The idea is to extract top M words ( say 20) from the first few lines of each doc, convert words to word embeddings and use it as feature vectors for the neural network.
Question : Since i take top M words from the document, it may not be in the “right” order each time, meaning the there can be different words at a given position in the input layer ( unlike bag of words model). Wont this approach impact the Neural network from converging?
Regards,
Srini
The key is to assign the same integer value to each word prior to feeding the data into the embedding.
You must use the same text tokenizer for all documents.
Hi Jason,
Thank you for your great explanation. I have used the pre-trained google embedding matrix in my seqtoseq project by using encoder-decoder. but in my test, I have a problem. I don’t know how to make a reverse for my embedding matrix. Do you have a sample project? My solution is: when my decoder predicts a vector, I should search for that in my pre-trained embedding matrix, and then find its index and then understand its related word. Am I right?
Why would you need to reverse it?
Hi Jason
Thanks for an excellent tutorial. Using your methods, I’ve converted text into word index and applied word embeddings.
Like Fatemeh, I’m wondering if it’s possible to reverse the process, and convert embedding vectors back into text? This could be useful for applications such as text summarising.
Thank you.
Yes, each vector has an int value known by the embedding and each int has a mapping to a word via the tokenizer.
Random vectors in the space do not, you will have to find the closest vector in the embedding using euclidean distance.
Dear Dr. Jason,
Accuracy: 89.999998 on my Laptop, result different from computer to other?
Well done!
Results are different each run, learn more here:
Hi,
So many thanks for this tutorial!
I’ve been trying to train a network that consists of an Embedding layer, an LSTM and a softmax as the output layer. However, it seems that the loss and accuracy get stuck at some point.
Do you have any advice?
Thanks in advance.
Yes, I have a ton of advice right here:
Thank you so much,
It helped me alot in learning how to use pre trained embbeding in neural nets
I’m happy to hear that.
Hi Jason, thank uou for the great materiAl.
I have one doubt, want to make the embedding of a list of 1200 documents to use it as input to a classification model to predict moviebox office based on the moviescript text…
My question is… if i want to train the embedding with the vocabulary of the real dataset, how can i after classify the rest of the dataset that was not trained ? Can a use the embeddings learned on the training as input to the classification model ?
Good question.
You must ensure that the training dataset is representative of the broader problem. Any words unseen during training will be marked zero (unknown) by the tokenizer unless you update your model.
Thank You Jason. As soon as I get the results I’ll try to share it here.
I’d like to thank you too about your great platform, it is being very helpful to me.
You’re welcome.
Nice post once again! It seems that in each batch all embedding are updated which I think it should not happen. You got any idea how to update only the one that are passed each time? That is for computational reasons or others problem definitions related reasons.
I’m not sure what you mean exactly, can you elaborate?
Hello Jason, i would like to think you for this post, it’s really interresting and understandable.
I’ve reused the script but instead of using “docs” and “labels” lists, i used the IMDB movie reviews dataset. The problem is that i can’t reach more than 50% accuracy and the loss is stable in all epochs to value 0.6932.
What do you think about that ?
I have some suggestions here:
Okay I’ll check it out, thank you Jason
Thanks for the article. Could you also provide an example of how to train a model with only one Embedding layer? I’m trying to do the same with Keras but the problem is that the fit method asks for labels which I don’t have. I mean I only have a bunch of text files that I’m trying to come up with the mapping for.
Models typically only have one embedding layer. What do you mean exactly?
Hello,
Thank you for the excellent explanation!
I have a few questions related to unknown words.
Some pretrained word embeddings like the GoogleNews embeddings have an embedding vector for a token called ‘UNKNOWN’ as well.
1. Can I use this vector to represent words that are not present in the training set instead of the vector of all zeros? If so, how should I go about loading this vector into the Keras Embedding layer? Should it be loaded at the 0th index in the embedding matrix?
2. Also, can I use the Tokenizer API to help me convert all unknown words (words not in the training set) to ‘UNKNOWN’?
Thank you.
Yes, find the integer value for the unknown word and use that to assign to all words not in the vocab.
Hi,
If word embedding doesn’t contain a word we input to a model , How to address this issue?
1) Is it possible to load additional words (besides those in our vocabulary) in embedding matrix.
Or may be any other elegant way you would like to suggest?
It is marked as “unknown”.
Hi .thanks a lot for your post . i’m new in python and deep learning !
i have 240,000 tweet train set “50 % male and 50% female” class . and 120,000 tweet test set ” 50 % male and 50% female”. i want use lstm in python bud i have following error at ” fit ” method :
ValueError: Error when checking input: expected lstm_16_input to have 3 dimensions, but got array with shape (120000, 400)
can you help me?
It looks like a mismatch between your data and the model, you can change the data or change the model.
Hi Jason, Thanks for this article.
I am getting this error
TypeError: ‘OneHotEncoder’ object is not callable
How oto overcome?
Thanks
I have some suggestions here:
Hi , i have 2 models with this embedding layers , how do i merge those model ?
Thanks
What do you mean exactly? An ensemble model?
hi Jason, greate tutorial, i am very new to all this. I have a query, u r using glove for the embedding layer but during fitting u are directly using padded_docs. The vectors in padded_docs have no co-relation to glove. I am sure that i am missing something plz enlighten.
The padding just adds ‘0’ to ensure the sequences are the same length. It does not effect the encoding.
Hi, Jason. Considering the “3. Example of Learning an Embedding”, I’m adding “model.add(LSTM(32, return_sequences=True))” after the embedding layer and I would like to understand what happens. The number of parameters returned for this LSTM layer is “5248” and I don’t know how to calculate it. Thank you.
Each unit in the LSTM will take the entire embedding as input, therefore must have one weight for each dimension in the embedding.
Hi Jason,
Do you have any example showing how we can use a bi-directional LSTM on text (i.e., using word embeddings)?
It is the same as using the LSTM except, the LSTM is wrapped in a Bidirectional Layer wrapper.
Here’s an example of how to use the Bidirectional wrapper:
I am interested in using the predict function to predict new docs. For instance, ‘Struck out!’ My understanding is that if one or more words in the doc you want to predict weren’t involved in training, then the model can’t predict it. Is the solution to simply train on enough docs to make sure the vocabulary is extensive enough to make new predictions in this way?
Yes, or mark new words as “unknown” a predict time.
Hello Jason, Is there any reason that the output of the Embedding layer is a 4×8 matrix?
No, it is just an example to demonstrate the concept.
Hi, Jason. Thanks a lot for this excellent tutorial. I have a quick question about the Keras Embedding layer.
vocab_size = len(t.word_index) + 1
t.word_index starts from 1 and ends with 15. Therefore, there are totally 15 words in the vocabulary. Then why do we need to add 1 here please?
Thanks a lot for the help!
The words are 1-offset and index 0 is left for “unknown” words.
Hi Jason,
If I have three columns of (string type) multivariate data, one column is categorical, the other two columns are not. Is it ok if I integer encode them using LabelEncoding(), and then scale the encoded data using feature scaling method like MinMax, StandardScaler etc. before feed into anomaly detection model? Even though the ROC shows an impressive result. But is it valid to pre-process text data like that?
Thank you.
Perhaps try it and compare performance.
I have tried it and it shows nearly 100% ROC. What I mean is that, is it accurate to pre-process the text data like that? Because when I checked your post regarding pre-processing text data, there is no feature scaling (MinMax, StandardScaler etc) on text data after encode them to integer. I’m afraid if the way I pre-process data is not accurate.
Generally text is encoded as integers then either one hot encoded or mapped to a word embedding. Other data preparation (e.g. scaling) is not required.
Hello
I was trying to do multi class classification of text data using Keras in R and Python.
In Python I was able to get predicted labels using inverse_transform() method from encoded class values. But when I try to do the same in R using CatEncoders library, getting some of the labels as NAs. Any reason for that.
No need to transform the prediction, the model can make a class or probability prediction directly.
Hi Jason,
Thanks for your sharing! I have a question on word embedding. Correct me if I am wrong: noticed the word embedding created here only contains words in the training/test set. I would think a word embedding including all vocab in GloVE file will be better? For example, if in production, we encounter a new word than in training/test set, but it is part of the GloVE vocab, in this case, we can capture the meaning of the production words although we don’t see it in training/test set. I think this will benefit sentiment classification problems with smaller training set?
Thanks!
Regards
Xiaohong
Generally, you carefully choose your vocab. If you want to maintain a larger vocab than is required of the project “just in case”, go for it.
Jason,
Are there non-text applications of embeddings? For example – I have large sets of categorical variables, each with very large number of levels, which go into a classification model. Could I use embeddings in such a case?
Rahul
Yes, embeddings are fantastic for categorical data!
Hi Jason…
Thanks a lot for such a nice post. It enriched my knowledge a lot.
I have one doubt on text_to_sequence and one_hot methods provided by keras. Both of them are giving same encoded docs with your example. If they gives same output then when we should use text_to_sequence and when we should go for one_hot?
Again thanx a lot for such a nice post.
Use the approach that you prefer.
Hi Jason,
Your post are really superb. Thanks for writing such great post .
I have one query , why people use Embedding Layer when we have already got the vector representation of a word from word2vec or glove . Using these two pre trained model we have already got the same size vector representation of each word and if word in not found we can assign the random value of same size. After getting the vector representation why we are passing to the Embedding layer?
Thanks
Often the learned embedding in the neural net performs better because it is specific to the model and prediction task.
Hi Jason,
Thanks for the reply .
What if I set trainable = False . Then also it is needed to use Embedding layer when I have already vector representation of each word in sequence using word2vec or glove.
Thanks
Yes, you can keep the embedding fixed during training if you like. Results are often not as good in my experience.
Hello Jason, thank you for this very good tutorial. I have a question : I trained your model on the imbd sentiment analysis dataset. The model has 80% of accuracy. But I have very bad embeddings.
First a short detail, I used
one_hotwhich use a hashing trick, but I used the
md5because the default
hashfunction is not consistent across runs, this is mentionned in the doc of Keras (so to save the model and predict new documents it is not good, am I right ?).
But the important thing is that I have very bad embeddings, I created a dict which map lower words and embedding (of size 8). Following this : I didn’t use Glove vector for now.
I tested to search most similar words and I got random words for “good” (holt, christmas, stodgy, artistic, szabo, mandatory…). I set the voc size to 100000. Of course due to the hashing trick, 2 words can have the same index so I don’t take into account similarities of 1.0. I think bad vectors embeddings are due to the fact we train embeddings on entire document and not context like word2vec, what do you think ?
Generally, the embeddings learned by a model work better than pre-fit embeddings, at least in my experience.
Hello, such a great tutorial. All your tutorials are very helpful! Thank you.
I want to find embedings of three different Greek sentences (for classification). Then I want to merge per paired and to fit my model.
I have read your tutorial ‘How to Use the Keras Functional API for Deep Learning’ which is very helpful for the merge.
My question is: Is there any way to calculated before to use as an input to my model? I must have three different models to calculate the embedings?
Thank you in Advance.
Yes, you could prepare the input manually if you wish: each word is mapped to an integer, and the integer will be an index in the embedding. From that you can retrieve the vector for each word and use that as input to a model.
Thank you!!!
Hi Jason,
Thank you so much for this wonderful explanation. After reading many other resources I understand the embedding layers only after reading this. I have few questions and I’d really appreciate if you could take out the time and answer them.
1) In your code, you used a Flatten layer after the Embedding layer and before the Dense layer. In few other places I noticed that a GlobalAveragePooling1D() layer is used in place of the Flattern. Can you explaining what Global Average Pooling does and why it’s used for Text Classification?
2)You explained in of the comments that each word will have only one vector representation before and after the training. So just to confirm, when a word x is inputted to the embedding layer, the output always updates the same vector that represents x? For example, for vocab size 500 and embedding dimension 10, ([500, 10] output shape)if word x is the first vector([0,10]) in the output, every time the word x is inputted the first vector([0, 10]) will be updated and not if the word is not present?
3) What’s the intuition behind choosing the size of the embedding dimension?
Thank you again Jason. Will be waiting for your response.
Mohit
Either approach can be used, both do the same thing. Perhaps try both and see what works best for your specific dataset.
The vectors for a given word are learned. After the model is finished training, the vectors are fixed for each word. If a word is not present during training, the model will not be able to support it.
Embedding must be large enough to capture the relationship between words. Often 100 to 300 is more than enough for a large vocab.
very userful post for beginners. But i have a doubt regarding size of embedding vector. As you mentioned in the post–
. ”
I did not understand why the output from the embedding layer should be 4 vectors. If it is related to input length, please explain me how ?. also the phrase “one for each word” , i did not understand this as well
A good rule of thumb is to start with a length of 50, and try increasing to 100 or 200 and see if it impacts model performance.
Hi Jason, I’m facing touble while using bag of words as features in CNN. Do you have any idea to implement BoW based CNN?
It is not clear to me why/how you would pass a BoW encoded doc to a CNN – as all spatial information would be lost. Sounds like a bad idea.
Hi! Thanks for a good tutorial. I have a question regarding embeddings. How is the performance when using Embedding Layer for training embeddings VS using pre trained embedding? Is it faster and does the model require less time when training if using a pre trained embedding?
Thanks for answer.
Embeddings trained as part of the model seem to perform better in my experience.
Hello, great post!
I want to know more about vocabulary size in Keras Embedding Layer. I am working with Greek and Italian languages. Do you have any scientific paper to suggest?
Thank you very much!
Perhaps test a few different vocab sizes for your dataset and see how it impacts model performance?
Ok, thank you very much!
Hi Jason
What needs to change when using the unsupervised model on pre-train embedding matrix?
also if you want to use week supervised?
You can follow the example here:
Good
Thanks.
Hi Jason,
For one-hot encoder, if we are given the new test set, how can we use one_hot function to have the same matrix space as we had for training set? Since we can not have a separate one_hot encoder for the test set.
Thank you very much.
You can keep the encoder objects used to prepare the data and use them on new data.
Hi,
I ofter see implementations like yours, where the embedding layer is built from a word_index that contains all words build, e.g., with Keras preprocessing Tokenizer API. But if I fit my corpus with the Tokenizer and with vocabulary size limit (with num_words), why should I need an embedding layer of the size of the total number of unique words? Wouldn’t it be a waste of space being used? Is there any issue to build an embedding layer with a size suitable to my vocabulary size limit I need?
Not really, the space is small.
It can be good motivation to ensure your vocab only contains words required to make accurate predictions.
Hi Mr Jason,
Excellent tutorial indeed!
I have a question regarding dimensions of embedding layer , could you please help:
e = Embedding(200, 32, input_length=50)
How do we decide to select size of out_dim which is 32 here? is there any specefic reason for this value?
Thanks in advance:
Leena
You can use trial and error and evaluate how different sized output dimensions impact model performance on your specific dataset.
50 or 100 is a good starting point if you are unsure.
Hi,
i trained my model using word embedding with glove, but kindly let me know how to prepare the test data for predicting results with trained weight. still i did not find any post which follow whole process. specially word embedding with glove
The word embedding cannot predict anything. I don’t understand, perhaps you can elaborate?
Hello, Jason,
thanks for the post. I have a question about the embedding data you actually fit. I print the padded_docs after the model compile. It seems to me that the printed matrix is not an embedding matrix. It’s a integer matrix. So I think what you fit in the CNN is not embedding but the integer matrix you define. Could you please help me explain it? Thanks a lot.
Yes, the padded_docs is integers that are fed to the embedding layer that maps each integer to an 8-element vector.
The values of these vectors are then defined by training the network.
Hi Jason,
I am working on character embedding. My dataset consists of raw HTTP traffic both normal and malicious. I have used the Tokenizer API to integer encode my data with each character having an index assigned to it.
Please let me know if I understood this correctly:
My data is integer encoded to values between 1-55, therefore my input_dim is 55.
I will start with output-dim of 32 and modify this value a needed.
Now for the input_length, I am a bit confused how to set this value.
I have different lengths for the numerical strings in my dataset the longest is 666. Do I set input-length to 666? And if I do this what will happen to the sequences with shorter length?
Thank you for your help!
Also, should I set the input dim to a value higher than 55?
Do you mean word embedding instead of char embedding?
I don’t have any examples of embedding char’s – I’m not sure it would be effective.
In meant character embedding. I used tokenizer and set the character level to True.
I am not sure how to use word embedding for query strings of http traffic when they are not made of real words and just strings of characters.
I am designing a character level neural network for detecting parameter injection in http requests.
The result would be in a binary format 0 if request is normal and 1 if it’s malicious.
So you don’t think character embedding is helpful here?
Sorry, I don’t have an example of a character embedding.
Nevertheless, you should be able to provided strings of integer encoded chars to the embedding in your model. It will look much like an embedding for words, just lower carnality (<100 chars perhaps). Also I don't expect good results.
What problem are you having exactly?
I have found
vocab_size = len(t.word_index) + 1to be wrong. This index not only ignores the
Tokenizer(num_words=Xparameter, but also stores more words than are actually ever going to be encoded.
I fit my text without word limit, and then encode the same text using the tokenizer, and the length of the
word_indexis larger than the
max(max(encoded_texts)).
That is very surprising!
Are you sure there is no bug in your code?
Hi Mario,
Yea. I just noticed the num_words issue with keras Tokenizer today. I think there are already multiple issues regarding it logged on GitHub:
hello jason, how are you?
I am doing my masters thesis on text summarization using word embeddings, and now i am in the middle of many questions, how could I use these features and which neural network alg is best. please could you give some guide….
I believe this will help:
thanks jason…it is really helpful..
I’m happy to hear that.
Hi Jason,
This entire article is very useful. It helped me in writing my initial implementation.
I have one question related to Input() and Embedding() in Keras.
If I already have a pretrained word embedding. In that case, should I use Embedding or Input ?
Yes, the embedding vectors are loaded into the Embedding layer and the layer may be marked as not trainable.
what if in test we have a new words which were not there in the training text ?
It will not proceed from the embedding layer correct?
They will me mapped to 0 or “unknown word”.
Hi, Jason
I want to ask you about how can i save my learned word embedding?
You can use get_weights() on the layer and save the vectors directly as nunmpy arrays if you like.
Or save the whole Keras model:
Can I construct a network for learning the embedding matrix only?
Yes, it is called a word2vec model, here’s how:
Hi,
I have a text similarity application where I measure Pearson correlation coefficient as keras metrics.
In many epochs, I noticed that the correlation value is nan.
Is this is normal or there is a problem in the model?
You may have to debug the model to see what the cause of the NAN is.
Perhaps an exploding gradient or vanishing gradient?
Do you mean that I have to adjust the activation function?
I use elu activation function and Adam optimization function,
Do you mean that I have to change any of them and see the results?
Perhaps.
Try relu.
Try batch norm.
Try smaller learning rate.
…
Can I know what do you mean by debuging the model?
Yes, here are some ideas:
–.
vocab_size = len(t.word_index) + 1
why do we need to increase the vocabulary size by 1???
This is so that we can reserve 0 for unknown words and start known words at 1.
Thanks a lot for this wonderful work .. we’re in July 2019 and still taking advange of it.
Thanks. I try to write evergreen tutorials that remain useful.
Dear Jason,
thanks a lot for your detailed explanations and the code examples.
What I am still wondering about, however, is how I would combine embeddings with other variables, i.e. having a few numeric or categorical variables and then one or two text variables. As I see you input the padded docs (only text) in model.fit. But how would I add the other variables? It doesn’t seem realistic to always only have one text variable as input.
Good question. You can have a separate input to the model for each embedding, and one for other numeric static vars.
This is called a multi-input model.
This will help:
Thank you Jason for this wonderful work and examples. That really help.
My last comment is related to : !
You’re welcome, I’m glad it helped.
Thank u Jason Brownlee, that’s very interesting and clear.
And if i have two inputs?
for example i am working on Text-to-SQL task and that necessit 2 inputs : user question and table schema (columns names).
how can i process? how can i do embeddig? with 2 embeddings layers?
Thank u for help.
You an design a multiple input model, I give examples here:
Ah okay, that’s interesting too, thanks!!
Can you please confirm me the architecture above to encode both user Questions and table Schema in the same model?
(1) userQuestion ==> One-hote-endoding ==> Embedding(GloVe) ==> LSTM
(2) tableShema ==> One-hote-endoding ==> Embedding(GloVe) ==> LSTM
(1) concatenate (merge) (2) ==> rest of model layers….
thakns Jason.
No need for the one hot encoding, the text can be encoded as integers, and the integers mapped to vectors via the embedding.
ok that’s clear, thanks.
The attention mechanism can be done only with the merge?
No, you can use attention on each input if you wish.
Should i do attention on both inputs (user Question & table Schema) separately or can i do it after merging the 2 inputs?
Test many different model types and see what works well for your specific dataset.
okay thank you Jason!!
No problem.
I wonder about pre-trained word2vec, there is no such good tutorial for that. I am looking to implement pre-trained word2vec but I do not know should I follow the same steps of Glove or look for another source for that?
thanks MR.Jason I am very inspired by you in machine learning
I show how to fit a standalone word2vec here:
Why dont you use mask_zero=True ion your embedding layers? It seems necessary since you are padding sentences with 0’s.
Great suggestion, not sure if that argument existed back when I wrote these tutorials. I was using masking input layers instead.
hi, can you help me with a question?
Im working with a dataset that has a city column as a feature and thats has a lot of different cities. So, I create a embeddinglayer for this feature. First, I used this command :
data[‘city’]= data[‘city’].astype(‘category’)
data[‘city’]= data[‘city’].cat.codes
After that, for each different city a value was assigned starting at 0
So, Im confused about how this embedded layer works when the test data has a input that was not training. I saw that you said that when this occurs, we had to put 0 as input, but 0
it’s related with some city. Should i start assigning this values to the city from 1?
Excellent question.
Typically, unseen categories are assigned 0 for “unknown”.
0 should be reserved and real numbering should start at 1.
thank you, you always help me a lot!
You’re very welcome Nathalia!
Hi Jason..
Thanks for such a great tutorial. I am confused on when we talk about learned word embeddings , do we consider weighs of the embedding layer or output of embedding layer.
let me ask in other way as well, when we utilize pretrained embedding let us say “glove.6B.50d.txt” . those word embeddings are weights or the output of the layer?
They are the same thing. Integers are mapped to vectors, those vectors are the output of the embedding.
Hi Jason,
I am new to ML, trying out different things, and your posts are the most helpful I encountered, it helps me a lot to understand, thank you!
Here I think I understood the procedure, but I still have a deeper question, on the point of embeddings. If I understand correctly, this embedding kind of maps a set of words as points onto another dimensionnal space. The surprising fact in your example is that we pass from a space of dimension 4 to a space of dimension 8, so it might not be seen as an improvement at first.
Still I imagine that the embedding makes it so that points in the new space are more equally placed, am I right? Then I don’t understand several things:
-How does the context where one words appear come into play? Other words which are often close by will also be represented by closer points in the new space?
-Why does it have to be integers? And why is it more applied to word encodings? I mean we could imagine the same process could be helpful for images as well. Or is it just a dimension reduction technique tailored for words documents?
Thank you for your insights anyway
Not equally spaced, but spaced in away that preserves or best captures their relationships.
Context defines the relationships captured in the embedding, e.g. what works appear with what other words. Their closeness.
Each word gets one vector. The simplest way is to map words to integers and integers to the index of vectors in a matrix. No other reason.
Great questions!
Jason,Thank you so much for your time and effort!
My question is related to the line-
“e = Embedding(vocab_size, 100, weights=[embedding_matrix], input_length=4, trainable=False)”
Here you are using-weights=[embedding_matrix], but there is no relation telling for which word which vector. Then, it produces for each document one 4*100 matrix(example for [ 6 2 0 0]).How it will extract the vectors related to 6,2,0,0 accurately?
The vectors are ordered, with an array index you can retrieve the vector for each word, e.g. word 0, word 1, word 2, work 3, etc.
Hello Jason,
I m searching for interesting formation on Python (numpy pandas … and tools for ML DL and NLP) and formation on Keras !!
Some suggestions please?
This might be a good place to start if you are working on NLP:
Thanks again Jason !!
No problem.
Thanks for article, I will run it for the Italian language documents. Is there any GoogleNews pretrained word2vec covering Italian vocabulary?
Good question, I’m not sure off the cuff sorry.
Hallo jason, Can I put Dropout into this model ?
I don’t see why not.
e = Embedding(vocab_size, 100, weights=[embedding_matrix], input_length=4, trainable=False)
What does the parameter – weights=[embedding_matrix] – stand for ? weights or inputs for the Embedding Layer ?
The weights are the vectors, e.g. one vector for each word in your vocab.
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
embedding_1 (Embedding) (None, 4, 8) 400
_________________________________________________________________
flatten_1 (Flatten) (None, 32) 0
_________________________________________________________________
dense_1 (Dense) (None, 1) 33
=================================================================
Total params: 433
Trainable params: 433
Non-trainable params: 0
_
Could you please clarify no. of weights learnt for embedding layer. Now we have 10 documents and embedding layer from 4 to 8. How many weights parameters will actually be learnt here. My understanding was that there should only be (4+1)*8 = 40 weights to be learnt, including the bias term. Why is it learning weights for all the documents separately (10*(4+1)*8 = 400) ?
The number of weights in an embedding is vector length times number of vectors (words).
Hi, Jason:
In this example, what type of neural architecture it is? It is not LSTM, not CNN. Is it a Multi-Layer Perceptron model?
We are just working with embeddings. I guess with an output layer, you can call it an MLP.
Hi, Thanks for this excellent article. I tried to use a pre-trained word embedding instead of random number in a Keras-based classifier. However, after constructing the embedding matrix and adding it to the embedding layer as follow, during training epochs all of the accuracy values are the same and no learning happens. However, after removing the “weights=[embedding_matrix]” it works well and reached the accuracy of 90%.
layers.Embedding(input_dim=vocab_size, weights=[embedding_matrix],
output_dim=embedding_dim,
input_length=max_len,trainable=True)
What can be the reason of this strange behavior?
thanks
An embedding specalized on your own data is often better than something generic.
Quick question. While using pre-trained embeddings (MUSE) in an Embeddings layer, is it okay to set trainable=True?
Note: The model doesn’t overfit when i set trainable=True. The model doesn’t predict well if i set trainable=False.
Yes, although you might want to use a small learning rate to ensure you don’t wash away the weights.
Thank you very much for your reply. Currently I am using a learning rate of 0.0001.
“Adam(learning_rate=0.0001)”
Perhaps use SGD instead of Adam as Adam will change the learning rate for each model parameter and could get quite large.
Or at least compare results with adam vs sgd.
Hi Jason,
Thank you for your post. I have an issue. The data I used include both categorical and numerical features. Say, Some features are cost and time while others are post code. How should I write the code? Build separate models and concatenate them together? Thank you
Great question!
The features must be prepared separately then aggregated.
The two ways I like to do this is:
1. Manually. Prepare each feature type separately then concate into input vectors.
2. Multi-Input Model. Prepare features separately and feed different features into different inputs of a model, and let the model concate the features.
Does that help?
Should the categorical be converted into numerical using one-hot encoding?
It can be, or an embedding can be used.
If you have multivariate time series of which you know they are meaningfully connected (say trajectories in x and y), does it make sense to put a Conv layer before feeding them into the embedding?
Could you explain what you mean by “preparing” the features?
No embedding is typically the first step, e.g. an interpretation of input.
Prepared means transformed in whatever way you want, such as encoding or scaling.
Thanks for the answer! If I may ask a follow up: is embedding of multivariate numerical data uncommon? I have seen fairly little work that uses it.
Embedding is used for categorical data, not numerical data.
hi Jason,
I use an example where a put the vocabulary size = 200 and the training sample contain about 20 different words.
When I check the embeddings ( the vectors) using ** layers[0].get_weights()[0]** I obtain an array with 200 rows.
1/ how can I know the vector corresponding to each word (from the 20 words I ‘ve got)?
2/ where the 180 (200 – 20) vectors come from since I use only 20 words?
Thanks in advance.
The vocab size and number of words are the same thing.
I think you might be confusing the size of the embedding and the vocab?
Each word is assigned a number (0, 1, 2, etc), the index of vectors maps to the words, vector 0 is word 0, etc.
Thanks for your answer Jason,
I ‘ll clarify my question:
the vocab size is 200 that means that the number of words is 200.
But effectively i’m working with 20 words only ( the words of my training sample) : let say word[0] to word[19].
So, after the embedding, the vector[0] corresponds to word[0] and so on. but vector[20].. vector [30] … what do they match ?
I have no word[20] or word[30] .
If you define the vocab with 200 words but only have 20 in the training set, the the words not in the training set will have random vectors.
Ok. Thank you.
I want to save my own pretrained model in the same way Golve saved their model as txt file and the word followed by its vector? How I would do that?
thank you
You could extract the weights from the embedding layer via layer.get_weights() then enumerate the vectors and save to a file int he format you prefer.
beginner in python I did not understand what you mean by enumerating..and which layer should I get weight from?…
You can get the vectors from the embedding layer.
You can either hold a reference to the embedding layer from when you constructed the model, or retrieve the layer by index (e.g. model.get_layers()[0]) or by name, if you name it.
Enumerating means looping.
Hello, Jason!
Thanks for the article!
I have been wondering about the input_dim of the learnable embedding layer.
You set it to vocab_size, that in your case is 50 (the hashing trick upper limit), which is much larger than the actual vocabulary size of 15.
The documentation of Embedding in keras says:
“Size of the vocabulary, i.e. maximum integer index + 1.”
Which is ambiguous.
I have experimented with some numbers for vocab_size, and cannot see any systematic difference.
Would it actually matter for more realistically sized examples?
Could you say a couple of words about it?
Thanks again
Smaller vocabs means you will have fewer words/word vectors and in turn a simpler model which is faster/easier to learn. The cost is it might perform worse.
This is the trade-off large/slow but good, small/fast but less good.
Thanks, Jason!
I may have not explained myself properly:
The *actual* number of words in the vocabulary is the same (14).
The difference is the value of input_dim to Embedding().
In the example, you chose 50 as high enough to prevent collisions in encoding, but also
used it as an input_dim in one of the cases.
Michael
I see.
I thought the question is “Size of the vocabulary, i.e. maximum integer index + 1.”. Since there are 14 words in this example, why vocab size isn’t 15, instead of 50?
There is the size of the vocab, there is also the size of the embedding space. They are different – in case that us causing confusion.
We must have size(vocab) + 1 vectors in the embedding, to have space for “unkown”, e.g. vector at index 0.
Jason: In this example, ‘one_hot’ function instead of ‘to_categorical’ function is used. The 2nd is the real one-hot representation, and the 1st is simply creating an integer for each word. Why isn’t to_categorical used here? They are different, right?
The function is badly named, but it does integer encode the words:
Thanks a lot Jason! in 3. Example of Learning an Embedding section, could you please elaborate what is 400 params that are being trained in the embedding layer? Thnx
Yes, each vector is mapped to an 8 element vector, and the vocab is 50 words. Therefore 50*8 = 400.
Jason, why the output shape of embedding layer is: (4,8)?
It should be (50,8) as the vocab size is 50 and we are creating the embeddings of all words in our vocabulary.
vocab size is the total vectors in the layer – the number of words supported, not the output.
The output is the number of input words (8) where each word has the same vector length (4).
Hi i need some help when running the file.
(wordembedding) C:\Users\Criz Lee\Desktop\Python Projects\wordembedding>wordembedding.py
Traceback (most recent call last):
File “C:\Users\Criz Lee\Desktop\Python Projects\wordembedding\wordembedding.py”, line 1, in
from numpy import array
File “C:\Users\Criz Lee\Anaconda3\lib\site-packages\numpy\__init__.py”, line 140, in
from . import _distributor_init
File “C:\Users\Criz Lee\Anaconda3\lib\site-packages\numpy\_distributor_init.py”, line 34, in
from . import _mklinit
ImportError: DLL load failed: The specified module could not be found.
pls advise.. thanks
Looks like there is a problem with your development environment.
This tutorial may help:
Hi jason, i’ve tried the url u provided but still didnt manage to solve it.
basically i typed
1. conda create -n wordembedding
2. activate wordembedding
3. pip install numpy (installed ver 1.16)
4. ran wordembedding.py
error shows
File “C:\Users\Criz Lee\Desktop\Python Projects\wordembedding\wordembedding.py”, line 2, in
from numpy import array
ModuleNotFoundError: No module named ‘numpy’
pls advise.. thanks
Sorry to hear that. I am not an expert in debugging workstations, perhaps try posting to stackoverflow.
Hi, Jason: How to encode a new document using the Tokenizer object fit on training data? It seems there is no function to return an encoder from the tokenizer object.
You can save the tokenizer and use it to prepare new data in an identical manner as you did the training data after the tokenizer was fit.
What do you mean by ‘save the tokenizer’? Tokenizer is an object, not a model.
It is as important as the model, in that sense it is part of the model.
You can save Python objects to file using pickle.
Brief post… and am interested to read about pre-trained word embedding for sequence labeling task.
Great, does the above tutorial help?
Hello Sir,
I am not able to understand the significance of vector space?
You have given 8 for the first problem, glove vectors has 100 dimension for each word.
What is the idea behind these vector spaces and what does each value of the dimension tells us?
Thankyou 🙂
The size of the vector space does not matter too much.
More importantly, the model learns a representation where similar works will have a similar representatioN (coordinate) in the vector space. We don’t have to specify these relationships, they are learned automatically.
Thankyou Sir for your answer. I clearly understood what is vector space.
I have one more question- If I declare the vocabulary size as 50 and if there are more than 50 words in my training data, what happens to those extra words?
For the same reason I could not understand this line of glove vectors-
“The smallest package of embeddings is 822Mb, called “glove.6B.zip“. It was trained on a dataset of one billion tokens (words) with a vocabulary of 400 thousand words.”
What about the 600 thousand words ?
Words not in the vocab are marked as 0 or unknown.
Hi,
I have a list of words as my dataset/training data. So i run your code for glove as follows:
—-Error is——
ValueError Traceback (most recent call last)
in ()
8 values = line.split()
9 words = values[0]
—> 10 coefs = asarray(values[1:], dtype=’float32′)
11 embeddings_index[words] = coefs
12 f.close()
/usr/local/lib/python3.6/dist-packages/numpy/core/_asarray.py in asarray(a, dtype, order)
83
84 “””
—> 85 return array(a, dtype, copy=False, order=order)
86
87
ValueError: could not convert string to float: ‘0.1076.097748’
————————————————————————————-
could you please help
This is a common question that I answer here:
i looked up the FAQ, i do not find any question related to “ValueError: could not convert string to float:”
I recommend the instructions in the specific FAQ I linked to.
Hello Jason Brownlee
I hope you are fine and thanks for writing such an article. I would like to ask an question to you where I am struck.
I had created keras deep learning model with glove word embedding on imdb movie set. I had used 50 dimension but during validation there is lot of gap between accuracy of training and validation data. I have done pre-processing very carefully using custom approach but still I am getting overfitting.
During prediction phase instead of testing dataset, I have taken real time text and goes for prediction but if I do prediction again and again for same input data, results varies. I am trying my best why my results varies.
I have 8 classes which depicts the score of each review and data is categorical. Softmax layer is predicting different result every time same data in fed to trained model
One model will produce the same output for the same input.
But, if you fit the same model on the same data multiple times, you will get different predictions. This is to be expected:
You can reduce the variance by creating an ensemble of final models and combining their predictions.
I have a dataset in which training data are test data in different files. How do I pass test data to the model for evaluation?
Load them into memory separately then use one for train and one for test/val.
Perhaps I don’t understand the problem?
How to give text input to a pre-trained LSTM model with input layer shape (200, ) .
I want to know how convert the text input and give it as an input to the model.
New text must be prepared in an identical way to data used to train the model.
Sir, could you please show me one example or link to code.
Yes, see the examples here:
Thanks you so much for this detailed tutorial. I think that I missed something… When using a Pre-Trained GloVe Embedding, in case you want to do a train test split, when is the right time to perform it?
Should I need to do it after creating the weight matrix?
Sorry, I don’t follow your question – how are the embedding and train/test split related? Can you please elaborate?
Sure. I would like to use the embedding to create a model that will predict a class out of 30 different classes. The dataframe that I’m using contains a text column which I want to use to predict the class. In order to do so, I thought about using the pre-trained embedding (same as you did but with more classes). Now, in order to test the model I want to do a split, so my question is when do you recommend to do it? I tried to create a weight matrix for the all data and then split it to train and test but it gave me a very poor results on the test set.
Any idea what am I doing wrong?
If you train your own embedding, it is prepared on the training dataset.
If you use a pre-trained embedding, not split make sense as it is already trained on different data.
Thanks I think that I understand (sorry I’m kind of newbie) . And assuming that I want to use the trained model to predict the labels on an unseen data, what should be the input? Should it be a padded docs with the same shape?
New input must be prepared in an identical way to the training data.
Hello, another great post!
I would like to ask, regarding with the Example of Using Pre-Trained Embedding, is it possible to reduce the vocabulary size? I had pre-trained embedding with 300 dimension. My vocabulary size is 7000.
When I put vocabulary size=300 I have this error:
embedding_matrix[i] = embedding_vector
IndexError: index 300 is out of bounds for axis 0 with size 300
Thanks in advance
Yes, but you must set the vocab size in the Tokenizer.
Thank you for the quick response. I did it.
Nice work!
Thanks for the tutorials! I’m using the 300d embedding for my image to caption model, but filter out rare words in my vocabulary. Say I have 10k words, should I just:
tokenizer.word_index.get(word, vocab_size + 1) <= vocab_size
to filter out the words I don't want?
Also, do you think it's worth retraining the embedding weights at a later point in training to fine-tune? I'm thinking of it in the same context as freezing a pre-trained encoder, then gradually unfreezing layers as the decoder reaches a certain level of validity.
You can control the vocab size by setting the “num_words” argument when constructing the tokenizer:
Perhaps try with and without fine tuning and compare the results.
Hi Jason,
I have a question regarding the code. You have taken the vocab_size to be 1 greater than the number of unique words. I am unable to understand why did you do that, can you please tell me.
I am a total newbie and I’m not from a programming background so I’m sorry if this was a silly question.
Thank You
Yes, we use a 1-offset for words and save index 0 for “unknown” words.
Oh…Thanks jason
You’re welcome.
Hi Jason,
I work on a multivariate time series with categoricals and numericals features. and I use a data of 3 years : 20 stocks during 30 days as window of training and 20 stocks during 7 days as target X_train.shape = (N_samples, 20*30, N_features), y_train.shape = (N_samples, 20*7, N_features), my question is, how I can apply an embedding layer for 3 categorical variables for this 3D arrays ?
I tried to use this part of code but it doesn’t work :
cat1_input = Input(shape=(1,), name=’cat1′)
cat2_input = Input(shape=(1,), name=’cat2′)
cat3_input = Input(shape=(1,), name=’cat3′)
cat1_emb = Flatten()(Embedding(33, 1)(cat1_input))
cat2_emb = Flatten()(Embedding(18, 1)(cat2_input))
cat3_emb = Flatten()(Embedding( 2, 1)(cat3_input))
See this tutorial:
Thank you for your prompt reply Jason, this tutorial is about Embedding for 2D array but in my case I need to build a model that take 3D array (N_samples, time_steps, N_features) as input and (time_steps,N_stocks) as output. | https://machinelearningmastery.com/use-word-embedding-layers-deep-learning-keras/ | CC-MAIN-2021-04 | refinedweb | 16,939 | 66.33 |
PDF to Image Conversion in Java
By Geertjan-Oracle on Sep 01, 2012
In the past, I created a NetBeans plugin for loading images as slides into NetBeans IDE. That means you had to manually create an image from each slide first. So, this time, I took it a step further. You can choose a PDF file, which is then automatically converted to an image for each page, each of which is presented as a node that can be clicked to open the slide in the main window.
As you can see, the remaining problem is font rendering. Currently I'm using PDFBox. Any alternatives that render font better?
This is the createKeys method of the child factory, ideally it would be replaced by code from some other library that handles font rendering better:
@Override protected boolean createKeys(List<ImageObject> list) { mylist = new ArrayList<ImageObject>(); try { if (file != null) { ProgressHandle handle = ProgressHandleFactory.createHandle( "Creating images from " + file.getPath()); handle.start(); PDDocument document = PDDocument.load(file); List<PDPage> pages = document.getDocumentCatalog().getAllPages(); for (int i = 0; i < pages.size(); i++) { PDPage pDPage = pages.get(i); mylist.add(new ImageObject(pDPage.convertToImage(), i)); } handle.finish(); } list.addAll(mylist); } catch (IOException ex) { Exceptions.printStackTrace(ex); } return true; }
The import statements from PDFBox are as follows:
import org.apache.pdfbox.pdmodel.PDDocument; import org.apache.pdfbox.pdmodel.PDPage;
I am using Pdf-renderer
and I am quite satisfied
Posted by kvaso on September 01, 2012 at 11:09 PM PDT #
You could download the LGPL version of JPedal from and generate the images using this code
/**instance of PdfDecoder to convert PDF into image*/
PdfDecoder decode_pdf = new PdfDecoder(true);
/**set mappings for non-embedded fonts to use*/
FontMappings.setFontReplacements();
/**open the PDF file - can also be a URL or a byte array*/
try {
decode_pdf.openPdfFile("C:/myPDF.pdf"); //file
//decode_pdf.openPdfFile("C:/myPDF.pdf", "password"); //encrypted file
//decode_pdf.openPdfArray(bytes); //bytes is byte[] array with PDF
//decode_pdf.openPdfFileFromURL("",false);
/**get page 1 as an image*/
//page range if you want to extract all pages with a loop
int start = 1, end = decode_pdf.getPageCount();
for(int i=start;i<end+1;i++)
BufferedImage img=decode_pdf.getPageAsImage(i);
/**close the pdf file*/
decode_pdf.closePdfFile();
} catch (PdfException e) {
e.printStackTrace();
}
The PDF file format has a lot of exceptions (Adobe tries to make sure as many files as possible open in Acrobat even if they do not meet the spec). So there are lots of 'gotchas' in the PDF world (like all TrueType fonts are Mac encoded unless they are not!). We write up the more interesting cases on our blog at
We also built the Open source library into our PDF viewer plugin for NetBeans... Come to our talk at JavaOne and we will be talking about it (and NetBeans/JavaFX)!
Posted by mark stephens on September 03, 2012 at 01:09 PM PDT #
I have recently started to use Aspose products () because they are getting alot of popularity among developers and users who are using them and they also provide java code for users also related to various conversion like in last week they provide code for converting a specific or all pdf pages to png which was very useful for developers like me. Below is the code:
Convert particular PDF page to PNG Image
[Java]
//open document
com.aspose.pdf.Document pdfDocument = new com.aspose.pdf.Document("input.pdf");
// create stream object to save the output image
java.io.OutputStream imageStream = new java.io.FileOutputStream("Converted_Image(1), imageStream);
//close the stream
imageStream.close();
Convert all PDF pages to PNG Images
[Java]
//open document
com.aspose.pdf.Document pdfDocument = new com.aspose.pdf.Document("input.pdf");
// loop through all the pages of PDF file
for (int pageCount = 1; pageCount <= pdfDocument.getPages().size(); pageCount++)
{
// create stream object to save the output image
java.io.OutputStream imageStream = new java.io.FileOutputStream("Converted_Image" + pageCount + "(pageCount), imageStream);
//close the stream
imageStream.close();
}
Posted by guest on July 16, 2013 at 12:13 AM PDT #
Hi Geertjan
Have you looked at our java library products?
These are commercial products. We have jPDFImages that can convert PDF pages to images. We also have jPDFViewer which can directly take and render a PDF document.
We have an advanced support for fonts, including font substitution, but if you run into any specific font related issue, contact us at support@qoppa.com.
Leila
Posted by Leila Holmann on August 13, 2013 at 08:38 AM PDT #
The font problem is still in the 1.8.x versions of PDFBox, but it is solved in the unreleased 2.0 version. The API is slightly different, but it is easy to find out by looking at the examples (PDFToImage) or at the test cases.
Posted by Tilman on June 14, 2014 at 02:04 PM PDT # | https://blogs.oracle.com/geertjan/entry/pdf_to_image_conversion_in | CC-MAIN-2015-18 | refinedweb | 800 | 57.87 |
Frequently asked questions
deFAQ - FAQ in German - work in progress
Contents
- 1 General
- 1.1 What is Inkscape?
- 1.2 What is vector graphics?
- 1.3 What is 'Scalable Vector Graphics'?
- 1.4 Is Inkscape ready for regular users to use?
- 1.5 What platforms does Inkscape run on?
- ....) For more information, see #SVG topics below.
Is Inkscape ready for regular users to use?!
What platforms does Inkscape run on?
We provide.
How did Inkscape start?
Inkscape was started as a fork of Sodipodi, in late 2003, by four Sodipodi developers: Bryce Harrington, MenTaLguY, Nathan Hurst, and Ted Gould. Our.
What does 'Inkscape' mean?
The name is made up of the two English words 'ink' and 'scape'..
Can I create webpages with it?
Not yet, although many users use Inkscape for webpage mockups or generating web imagery.
Can I create animations with it?
No, Inkscape does not support SVG animation yet. It is for static 2-D graphics. However you can export graphics from Inkscape to use in Flash or GIF animations. And since February 2006, Blender can import SVG data and extrude it to render 3D graphics.
Will there be an Inkscape 1.00? What would it be like?
Assuming development continues steadily, we will inevitably hit 1.00, but no particular date has been discussed yet.
Before going gold with any kind of 1.00 release, there would be a significant effort to tie down loose ends, a push for greater stability and smoothing off of rough edges. This would be a time consuming process and until it does happen Inkscape may be subject to substantial changes between releases.
Contributing to Inkscape
How can I help?
Grab the code and start hacking on whatever draws your attention. Send in a patch when you're happy with it and ready to share your efforts with others. We also need writers and translators for the user manual and interface internationalization (I18N) files..
Are there non-coding ways to help?
Certainly! While there is certainly a lot of coding work to be done, there are also a lot of other non-programming tasks needed to make the project successful:
Bug wrangling and testing:
Identifying and characterizing bugs can help a HUGE amount by reducing the amount of development time required to fix them.
- Find and report bugs. This is a critical need for ensuring the quality of the code.
- Review and verify reported bugs. Sometimes the bug reports don't have enough info, or are hard to reproduce. Try seeing if the bug occurs for you too, and add details to the description.
- Performance Testing - Create SVG's that stress out Inkscape, and post them as test cases to the Inkscape bug tracker, with your time measurements.
- Compatibility Testing. Compare the rendering of SVG's in Inkscape with other apps like Batik and Cairo, and report differences found (to both projects).
- Bug prioritization. Bugs that are marked priority '5' are new bugs. Review them and set them to high/medium/low priority according to their severity. See Updating Tracker Items in wiki for details.
Helping fellow users
In addition to making a good drawing application, it's also extremely important to us to build a good community around it; you can help us achieve this goal directly, by helping other users. Above all, keep in mind that we want to maintain Inkscape's community as a nice, polite place so encourage good behavior through your own interactions with others in the group.
- Write tutorials. If something isn't already documented in a tutorial, write up a description of how to use it.
- Participate on inkscape-user@. Answer questions that pop up on the mailing list from other users. Also, share your tips and tricks, and demo new ways of using Inkscape for cool stuff.
- Create clipart. You can upload it to the openclipart.org project.
- Give Inkscape classes. Teach people local to you about using Inkscape. Or give presentations at local events, Linux group meetings, etc. about Inkscape (and other Open Source art tools).
Development (no coding needed)
- Translations. Information on how to create translations for the interface is available on the TranslationInformation page in Wiki.
- Design Icons and SVG themes. Create new icons for existing themes or start a new icon theme. Also see
-ists in Wiki.
- Add extensions. For file input/output, special features, etc. Inkscape is able to tie into external programs. Create new .inx files to hook these up for use in Inkscape. Also, if you're comfortable scripting in Perl, Python, etc. have a shot at improving the extensions, too!
- Add source code documentation The source code needs even the simplest documentation in some places, documenting functions will certainly help the next coder.
- Create templates. See the Inkscape share/templates directory.
- Work in Wiki. Wiki is a great place for gathering development info but always needs updating, copyediting, and elaboration.
- Plan future development. Review and help update the Roadmap in Wiki. Basically, talk with developers about what they're working on, planning to work on, or recently finished, and update the roadmap accordingly.
Spread the word - Inkscape Marketing and Evangelism
Increasing the size of the userbase is important. The network effects of more interested users means more potential contributors and hopefully people saying nice things about us, and giving Inkscape word of mouth advertising which we believe is important. All our users and developers serve as ambassadors for Inkscape and others will judge Inkscape based on how well we behave. It is important that we all be polite and friendly and make Inkscape a project people like using and enjoy working on, all other evangelism follows on naturally from there. Generally though for building the community we prefer quality over quantity so be careful not to go too overboard with evangelizing or the "hard sell". We want to work with other applications, rather than "killing" off other software and such comments are counter productive. We need to manage expectations. We want users to be pleasantly surprised by how much Inkscape does, not disappointed that it does not match other programs feature for feature. Inkscape should be thought of as providing artists another way to be creative which complements their existing skills and tools.
- Write Articles. Get articles published in various online (or even printed) magazines and blogs. Don't forget to include a link to Inkscape!
- Create Screenshots. Especially for new features.
- Create Examples. Examples are useful for showcasing different ways Inkscape can be used. Create some screenshots and text, and submit to the web wranglers (via the inkscape-devel mailing list) to add to the site.
- Work on the Website. Help on the website is ALWAYS appreciated. Knowledge of HTML is required; PHP know-how is helpful. Check out the website code from the.
- Recruit more developers. Find people with an interest in doing coding, and encourage them to work on Inkscape.
Here's one:
Feel free to contribute your own banners or buttons for promoting Inkscape. The best ones will be linked here.
Using Inkscape
How do I rotate objects?
Inkscape follows the convention used by CorelDraw, Xara and some other programs: instead of a separate "rorate" tool, you switch to Selector (arrow), click to select, and then click selected objects again. The handles around the object become rotation handles - drag them to rotate. You can also use the Tranasform dialog for precise rotation and the [, ] keys to rotate selection from the keyboard.
How do I change the color of text?
Text is not different from any other type of object in Inkscape. You can paint its fill and stroke with any color, as you would do with any object. Swatches palette, Fill and Stroke dialog, pasting style - all this works on texts exactly as it does on, for example, rectangles. Moreover, if in the Text tool you select part of a text by Shift+arrows or mouse drag, any color setting method will apply only to the selected part of the text.
How to insert math symbols or other special symbols in the drawing?
When editing text on canvas, press Ctrl+U, then type the Unicode code point of the symbol you need. A preview of the symbol is shown in the statusbar. When done, press Enter. A list of Unicode codes can be found at; for example, the integral sign character is "222b". You must have a font installed on your system that has this character; otherwise what you'll see is a placeholder rectangle.
When editing text on the Text tab of the Text and Font dialog, you can use any GTK input modes that your GTK installation supports. Consult GTK documentation for details. measure distances and angles?
Inkscape does not yet have a dedicated Measure tool. However, the Pen tool can be used in its stead. Switch to Pen (Shift+F6), click at one end of the segment you want to measure, and move the mouse (without clicking) to its other end. In the statusbar, you will see the distance and angle measurement. Then press Esc to cancel.
The angle is measured by default from 3 o'clock origin counterclockwise (the mathematical convention), but in Preferences you can switch this to using compass-like measurement (from 12 o'clock, clockwise).
Starting from 0.44 we also have the Measure Path extension that will measure the length of an arbitrary path.
Does Inkscape support palettes? Where can I "store" and save colours for further use? is to use the Blend extension to create a blend between two curved paths painted with different colors or opacity levels; with enough intermediate steps, such a blend will look almost like an arbitrarily curved gradient.
I'm trying to make a colored tiling of clones, but the tiles refuse to change color.
The original object from which you're cloning must have its fill make Alt+click and Alt+drag work on Linux?
Alt+click and Alt+drag are very useful Inkscape shortcuts ("select under" and "move selected" in Selector, "node sculpting" in Node tool). However, on Linux Alt+click and Alt+drag are often reserved by the window manager for manipulating the windows. You need to disable this function in your window manager so it becomes usable in Inkscape.
KDE
For example, in KDE this is done in Control Center > System > Window Behavior > Actions.
XFCE4
Please read
GNOME
Go to System > Preferences > Windows. You are presented with three options to move windows around: "Alt", "Ctrl" or "Super" (Windows logo key). Choose "Super".
I'm having problems with non-Latin filenames on Linux - help!
If your locale charset is not UTF-8, then you need to have this environment variable set:
$ G_BROKEN_FILENAMES=1 $ export G_BROKEN_FILENAMES
This is necessary for Glib filename conversion from the locale charset to UTF-8 (used in SVG) and back to work. See <a href="">this page</a> for more details.
How
- 1. Locate the installation directory.
- 2. Enter the Inkscape\locale directory
- 3. Locate the directory with the two letter locale you don't want to use.
- 4. Rename (or remove) this directory to something like disable_de or x_es
- 5. Restart inkscape and the default English (en) locale will be used.
- Beware, this change the behaviour for all inkscape users on this machine
I installed a new font on my Windows system, but Inkscape does not see it.
This was a bug in versions of Inkscape up to 0.43, caused by using an obsolete font cache. This cache is stored in the file called
.fonts.cache-1. This file may be in your Windows folder, or in your Temp folder, or in "My documents" folder, or in the folder listed in the $HOME environment variable. Use file search by name to locate this file. Then simply delete this file and restart Inkscape; now it will see the new fonts.).
SVG topics
Are Inkscape's SVG documents valid SVG?
Yes. Inkscape does not yet support all features of SVG, but all files it generates are valid SVG (with the partial and temporary exception of flowed text, see below). All standard-conformant SVG renderers show them the same as in Inkscape. If they do not, it's a bug. If this bug is in Inkscape, we will fix it (especially if you help us by reporting it!).
What about flowed text?
When flowed text support was added to Inkscape, it was conformant to the then-current unfinished draft of SVG 1.2 specification (and was always described as an experimental feature). Unfortunately, in further SVG 1.2 drafts, the W3C decided to change the way this feature is specified. Currently SVG 1.2 is still not finished, and as a result, very few SVG renderers currently implement either the old or the new syntax of SVG 1.2 flowed text. So, technically, Inkscape SVG files that use flowed text are not valid SVG 1.1, and usually cause problems (errors or just black boxes with no text).
However, due to the utility of this much-requested feature, we decided to leave it available to users. When the final SVG 1.2 specification is published, we will change our flowed text implementation to be fully conformant to it, and will provide a way to migrate the older flowed text objects to the new format.
Until that is done, however, you should not use flowed text in documents that you intend to use outside of Inkscape. Flowed text is created by clicking and dragging in the Text tool, while simple click creates plain SVG 1.1 text; so, if you don't really need the flowing aspect, just use click to position text cursor instead of dragging to create a frame. If however you really need flowed text, you will have to convert it to regular (non-flowed) text by the "Convert to text" command in the Text menu. This command fully preserves the appearance and formatting of your flowed text but makes it non-flowed and SVG 1.1-compliant.
What, then, is "Inkscape SVG" as opposed to "Plain SVG" when saving a document?
Inkscape SVG files use the Inkscape namespace to store some extra information used by the program. Other SVG programs will not understand these extensions, but this is OK because the extensions only affect how the document is edited, not how it looks. Extensions must not cause any rendering problems in SVG-compliant renderers. However, some non-compliant renderers may have trouble with the presence of the extensions, or you may want to save some space by dropping the Inkscape information (if you're not planning to edit the file in Inkscape again). This is what the "Plain SVG" option is provided for.
What SVG features does Inkscape implement?
The main parts of SVG that Inkscape does not support yet are filters, it *does* make changes:
- All objects will get unique "id" attributes. If already existing and unique, they will be preserved, otherwise one will be derived from node name.
- Some sodipodi: and inkscape: namespaced metadata will be added to the beginning of document.
- If you edit a gradient, that gradient will be broken up into 2 linked gradients - one defining color vector, another one position.
- Changing any style property forces reconstructing of the whole 'style' attribute, which means CSS (not XML) comments will be lost and formatting of CSS may change.
- The formatting style of the SVG file will be changed to follow the style hardcoded into Inkscape.
There is ongoing work to allow Inkscape to better preserve hand-created SVG markup but it is a very difficult task requiring a lot of infrastructure work and will happen very gradually - but help is always appreciated.
Inkscape and renderer X show my SVGs differently. What to do?
That depends on X. We accept Batik and Adobe SVG plugin as authoritative SVG renderers because they are backed by some of the the authors of the SVG standard and really care about compliance. This may not be true for other renderers. So if you are having a problem with some renderer, please try the same file with either Batik or Adobe, or better yet, with both (they are free and cross-platform). If you still see a discrepancy with Inkscape rendering, we want to look into it. Please submit a bug; don't forget to attach a sample of the problem file to the bug report, and ideally include screenshots too.
Inkscape and other programs
Why the split from Sodipodi?
Inkscape started as a code fork of Sodipodi. The main reasons were differences in objectives and in development approach. Inkscape's objective is to be a fully compliant SVG editor, whereas for Sodipodi SVG is more a means-to-an-end of being a vector illustration tool. Inkscape's development approach emphasizes open developer access to the codebase, as well as using and contributing back to 3rd party libraries and standards such as HIG, CSS, etc. in preference to custom solutions. Reusing existing shared solutions helps developer to focus on the core work of Inkscape.
For background, it may also be worth reviewing Lauris' [ Sodipodi direction] post from Oct 2003, and his thoughts on SVG, licensing, and the value of splitting the project into two independent branches.
What's the difference between Inkscape and Dia?
Dia is for technical diagrams like database charts, class diagrams, etc., whereas Inkscape is for vector drawing such as logos, posters, scalable icons, etc.
SVG is a useful format for creating diagrams, though, so we hope as Inkscape grows as a fully-featured SVG editor, it will also be useful for making attractive diagrams too. Several of us hope Inkscape will become a useful technical drawing tool and work on features with that goal in mind. However, Dia provides a number of useful capabilities such as support for UML, autogeneration of diagrams, etc. that are well beyond the scope of a general SVG editor. Ideally both Inkscape and Dia can share various bits of code infrastructure and third party libraries.
Is this intended to replace Flash?
While SVG is often identified as a "Flash replacement", SVG has a huge range of other uses outside that of vector animation. Replacing Flash is not one of Inkscape's primary intents. If SVG can replace Flash, and Inkscape can help, that's great, but there's a lot more to SVG than web animation that is worth exploring. (See also SMIL).
However, currently bitmap editors are often used for common tasks they are not well equipped for, such as creating web page layouts, logos, or technical line art. In most cases, this is because users are not aware of the power (or even the existence) of the modern vector editors. Inkscape wants to amend this situation, and to raise a vector editor to the status of an essential desktop tool for everyone, rather than an exotic specialized tool that only professionals use.
Will Inkscape be part of the Gnome-Office?
Inkscape will need to mature a bit further before this can be considered. Specifically, better support for embedding (Bonobo) is needed, and the Gnome-Print subsystem needs to be tested more thoroughly (help very much appreciated here). If you can compile a recent version of Inkscape and help us with testing it would be very useful.
What formats can Inkscape import/export?. CGM file in OpenOffice Impress. Copy to Open Office Draw and isert orinibal JPG or another bitmap graphics. Save file as ODG And you can continue in Open Office Draw program.)
- Select all (CTRL+A)
- Export as SVG.
- Open SVG file in Inkscape and correct mistakes if they appear.
I exported an SVG file from Adobe Illustrator, edited it in Inkscape, and imported back to AI, but there my changes are lost!
That's because Adobe cheats. It creates a valid SVG, but apart from the SVG code it also writes to the file, in encoded binary form, the entire AI-format source file of the image. Inkscape, of course, edits the SVG part of the image and leaves the encoded binary untouched.?
The codebase Inkscape inherited from Sodipodi was C/Gtk based. There is an ongoing effort to convert the codebase to C++/Gtkmm. The ultimate goal is to simplify the code and make it more maintainable. We invite you to join us. Just don't mention Qt. :)
What is your position on code sharing with other projects?
Yes, sharing of code libraries with other projects is highly desirable, provided the right conditions exist. A good candidate for a library will be mature, widely distributed, well documented, and actively maintained. It should not introduce massive dependency problems for end-users and should be stable, powerful, and lightweight. It should strive to do one thing, and do it well. Libraries that don't meet all the criteria will be considered on a case-by-case basis.
How to create an Inkscape extension?
You don't need to know much, if anything, about Inkscape internals to create a useful extension. Aaron Spike, the author of most Python extensions that come with Inkscape, wrote a helpful web page (including a series of tutorials) on creating extensions in Python (Perl and Ruby are also supported).
What's a good way to get familiar with the code?
You can start with the Doxygen documentation. There you can find not only the usual Doxygen stuff but also different categorized views into the inkscape source.
In the Documentation section of the Inkscape website you can find some high-level diagrams and links to other documentation that's been produced such as the man page. Historically, this codebase has not been kept well documented so expect to find many areas where the only recourse is to work through the code itself. However, we place importance on working to change this, and to flesh out the documentation further as we go.
Some developers have found that testing patches is a good way to quickly get exposure to the code, as you see how other developers have approached making changes to the codebase. Other developers like to pick an interesting feature request (or perhaps a feature wish of their own) and focus on figuring out how to implement it. Occasionally we also have large scale grunt-work type changes that need to be applied to the codebase, and these can be easy ways to provide significant contributions with very little experience.
Getting beyond initial exposure, to the next stage of understanding of the codebase, is challenging due to the lack of documentation, however with some determination it can be done. Some developers find that fixing a crash bug by tracing execution through the various subsystems, brings good insights into program flow. Sometimes it is educational to start from an interesting dialog box and tracing function calls in the code. Or perhaps to start with the SVG file loader and follow the flow into and through the parser. Other developers have found that writing inline comments into the code files to be highly useful in gaining understanding of a particular area, with the fringe benefit of making that bit of code easy for future developers to pick up, too.
Once you feel far enough up the learning curve, implementing features will firm up your experience and understanding of the codebase. Be certain to also write test cases and documentation, as this will be of great help to future developers and thus ensure the longevity of the codebase.
What is the size and composition of the codebase?
The latest statistics are available at..
How are feature requests selected for implementing?
Many developers become involved because they wish to "scratch an itch", so of course if they wish to work on a particular feature, then by definition that one will receive implementational attention. This is the primary mechanism by which features get implemented.
Inkscape also strives to take user requests for features seriously, especially if they're easy to do or mesh with what one of the existing developers already wants to do, or if the user has helped the project in other ways.
If you have a feature that you'd really like to see implemented, but others aren't working on, the right thing to do is delve into the code and develop it yourself. We put great importance on keeping the development process open and straightforward with exactly this in mind.
I'd prefer the interface to look like ...
Understandably, many users are accustomed to other programs (such as Illustrator, the GIMP, etc.) and would prefer Inkscape to follow them in design. Inkscape developers are constantly examining other projects and on the look for better interface ideas. A large motivation is to make the application follow the GNOME Human Interface Guidelines, which has a number of rules in how the interface is made. The Inkscape developers also seek advice and ideas from other GUI app designers, such as the GIMP crew, AbiWord, and Gnumeric; they've been at it longer and we view them as an excellent source of battle tested experience.
But please understand that the Inkscape interface will, at the end of the day, be the "Inkscape interface". We will strive to find our own balance of compatibility with common drawing programs, wishes of our userbase, good workflow, creativity of our developers, and compliance with UI guidelines. It's unlikely that this balance will meet every user's wish, or achieve 100% compliance with the various platform specific Interface Guidelines, or include every developer's idea, and if it did it probably wouldn't be as good. ;-)
Usually when we discuss interface look and feel, we arrive at the conclusion that, really, it should be configurable so that each user can flip a few switches and get an app that is most cozy to them. However, flexibility should not be used as an excuse not to make tough decisions when they are called for. | https://wiki.inkscape.org/wiki/index.php?title=Frequently_asked_questions&direction=next&oldid=6697 | CC-MAIN-2019-47 | refinedweb | 4,310 | 64.61 |
Changes to class members break some explicit imports
With the latest changes to the way the compiler handles macro members, there are issues with explicit imports of classes containing macro members.
On today's itask compiler with today's clean-platform, it seems that any program importing Data.Foldable fails to build when building with cpm:
module fold import Data.Foldable Start = 1
The errors:
Error [Data.Foldable.dcl,6,import]: macro foldr not imported
Error [Data.Foldable.dcl,5,import]: function/macro const not imported
Changing lines 5 and 6 of Foldable.dcl to
from Data.Functor import class Functor (..), const from Data.Monoid import class Monoid (..), class Semigroup (..), foldr
yields a run-time error in the compiler:
Run time error, rule '[line:696];15;434' in module 'explicitimports' does not match
(Perhaps this is because Data.Foldable also wants to define foldr?)
Using simple imports of the entire Data.Functor/Monoid modules in Data.Foldable works.
Curiously, the first problem (that macros are not imported) does not occur when compiling with clm, but the second problem (run-time error in the compiler) does occur when compiling with clm, although @mlubbers's clm is apparently not immune to the first problem. | https://gitlab.science.ru.nl/clean-compiler-and-rts/compiler/-/issues/30 | CC-MAIN-2020-45 | refinedweb | 202 | 59.6 |
In order to learn about RxSwift, I wanted to come up with a completely contrived demo application that is written entirely in vanilla UIKit. We can then, step by step, convert that application to use RxSwift. The first couple of steps will be a bit hamfisted, but this will allow new concepts to be introduced slowly.
In part 1 of this series, we saw a visual representation of what this app does:
This app is written using a storyboard (generally I prefer
XIBs, but
that’s a discussion for another day), and has a single
UIViewController.
The entirety of that view controller is below:
import UIKit." } }
As you can see, there’s not much to it. There is a
UILabel to show how many
times a button has been tapped. That button isn’t stored in the class, because
it’s wired up to an
IBAction. There’s no need to store it.
Unfortunately, we have to manually keep track of how many times the button has
been tapped. This data is stored in
count.
The
@IBAction func onButtonTap(sender:) is the aforementioned
IBAction, which
is wired up in Interface Builder, and is called by UIKit when the button is tapped.
Naturally, this is all super easy code, and there’s not much to it.
You can see all of this code on GitHub. Note that this is one commit in that repository; if you want to cheat and read ahead, you can look at the commits that follow. Once this series is over, you can walk forward and backward through time by checking out each individual commit.
Converting to Rx
The first step to converting this to use Rx is to think of what the inputs and outputs are. What causes things to happen? What causes us to do a computation, or change what we present to the user?
In such a simple app, it’s quickly obvious that the
UIButton being tapped is
what kicks off a computation, and a change in state. As the button is tapped,
we will need to continue to change the value of
count.
Marble Diagrams
How would this transition of state look? Let’s model it over time:
---[tap]---[tap]---[tap]--->
equates to
---[ 1 ]---[ 2 ]---[ 3 ]--->
The above is a crude representation of a marble diagram. A marble diagram is a way of representing signals in the Rx world. The bar represents time. Above, quite obviously, we start on the left and work toward the right. Each tap on the upper diagram yields a different value on the bottom diagram.
Marble diagrams are great ways to show how operators work in Rx. A great example
is
map: the input is at the top, the output is at the bottom, and the
map operation is in the middle:
In the example, the map is simply multiplying the input by 10, so 1 becomes 10, 2 becomes 20, and 3 becomes 30.
Aside: You’ll notice in some marble diagrams the arrows aren’t arrows at all,
but actually lines. They’ll end with either
| or
X. A pipe represents
a stream that has completed. These streams have declared that they will never
signal again. An X represents an error. Streams that error do not continue
to signal events after the error.
Coming back to our streams:
---[tap]---[tap]---[tap]--->
equates to
---[ 1 ]---[ 2 ]---[ 3 ]--->
Clearly, the place we start is with the
UIButton being tapped. In order to get
access to that button programmatically, we’ll need to add it to our view controller.
I’ve done so, and called it
button:
@IBOutlet weak var button: UIButton!
RxCocoa
RxSwift is a foundation; it works for any sort of Swift and is not specific to user interfaces, network calls, nor anything else. RxCocoa is, in short, UIKit wrapped in Rx. For work done on user interfaces, you’ll need to:
import RxSwift import RxCocoa
Most UIKit controls will have reactive extensions, and in general, they’ll
be exposed to developers in the
rx property.
So, to get to the stream that represents taps of the button in our view controller,
we need to use
button.rx.tap.
Observables
button.rx.tap is a variable that returns a
ControlEvent. A
ControlEvent
is a special kind of something else: an
Observable.
Every time that I’ve said “stream”, what I’m really saying is “Observable”.
Observables are the way streams are represented in Rx. You can perform many
operations on observables; that’s what the entire RxMarbles site is for.
Most things that you work with in Rx are related to, or can be converted to,
an
Observable. In fact, most higher-order types like
ControlEvent can
be converted to
Observables by using
.asObservable().
At the end of the day, just remember that an
Observable is simply a
representation of a stream of events over time.
Subscriptions
Generally speaking, the last operation you’ll perform on an
Observable—on a stream—is to take action based on that stream
signaling. In our case, how do we take action every time the button is tapped?
We will
Observable. This allows us to provide a closure
where we run whatever code we need. So, our code now looks like this:
self.button.rx.tap .subscribe(onNext: { _ in })
So what do we do inside this closure, when the
Observable signals?
Wiring Up
For this first step, we’ll simply use the existing method we wrote for the
procedural version of the app:
@IBAction func onButtonTap(sender:). This is
not the right way to do things in the Rx world, but let’s take things
slowly, one step at a time. Thus, our new chain looks like this:
self.button.rx.tap .subscribe(onNext: { _ in self.onButtonTap(sender: self.button) }
Since we don’t need
onButtonTap(sender:) to be an
@IBAction anymore, we can
get rid of the
sender parameter. That cleans things up nicely:
self.button.rx.tap .subscribe(onNext: { _ in self.onButtonTap() }
Disposables
In principle, we can build and run right now, and things should work. However, if we do, we’ll see a build warning:
Uh, what‽
In RxSwift, it’s important to clean up after yourself, and terminate
Observables,
especially network requests. Without getting too deep into the weeds, there is
basically only one rule: when you see the above warning, add that object to a
DisposeBag.
In our case, we’ll add a
DisposeBag to our
ViewController. This is because the
lifetime of this subscription is tied to the lifetime of our view controller:
private let disposeBag = DisposeBag()
And then we’ll use it in our
subscribe() call:
self.button.rx.tap .subscribe(onNext: { _ in self.onButtonTap() } .addDisposableTo(self.disposeBag)
Don’t let this put you off. There’s really nothing to managing resources, and
having a way to reliably dispose of all active
Observables comes in very handy
from time to time. After a year of doing RxSwift, I’ve never had to
think about disposal, outside of dropping things in a dispose bag.
Generally speaking, each
class/
struct that is doing
subscribe()ing gets
one shared
DisposeBag, and all subscriptions get added to it. That’s it.
Debugging
With the code we have above, it will run, and it will work. However, what if we
want to debug what’s happening within an
Observable chain? Naturally, we can
place a breakpoint within a closure–such as the one we’re providng to
subscribe().
Sometimes, though, you want to see flow, even in places where we don’t have a
closure to interrupt.
Luckily, RxSwift provides an easy way to handle this:
debug(). Let’s change our
chain to include a call to
debug():
self.button.rx.tap .debug("button tap") .subscribe(onNext: { [unowned self] _ in self.onButtonTap() }).addDisposableTo(disposeBag)
And now let’s run the app, and click 3 times. Here’s the console output:
2016-12-15 19:02:31.396: button tap -> subscribed 2016-12-15 19:02:34.045: button tap -> Event next(()) 2016-12-15 19:02:34.584: button tap -> Event next(()) 2016-12-15 19:02:35.161: button tap -> Event next(())
The call to
debug() will tell us when the
Observable is subscribed to, as
well as each time it has an event. As discussed above,
Observables can signal:
Next(with a value)
Error(with an error; represented by a
Xin a marble diagram)
Completed(represented by a
|in a marble diagram)
All of these will be shown by
debug().
Though it’s a bit hard to tell above,
debug() also shows us what value was
signaled. In our case, the button tap is not just a
ControlEvent, but in
actuality a
ControlEvent<Void>. That’s because a button’s tap doesn’t have
any other data to it; all we know is, a tap happened. This is in contrast, say,
to the value of a
UISegmentedControl, where its
value stream is a
ControlEvent<Int>. The
Int is the index of the selected segment. What good
would it be to signal that the selected segment changed without the new selection?
Coming back to our button tap, the
ControlEvent<Void>, which is a special kind
of
Observable, doesn’t really carry a value at all; its value is
Void. In
Swift, we can represent
Void as
(). That’s why you’re seeing
Event next(());
this could alternatively be written as
Event next(Void).
By contrast, if we were signaling with the current count—perhaps after a
map—the above would read:
2016-12-15 19:02:31.396: button tap -> subscribed 2016-12-15 19:02:34.045: button tap -> Event next(1) 2016-12-15 19:02:34.584: button tap -> Event next(2) 2016-12-15 19:02:35.161: button tap -> Event next(3)
At first,
debug() may seem like it’s just cluttering up your console. However,
as we’ll learn in future posts, it’s extremely powerful, and can give you
important insight into how your data is flowing through your streams.
Next Steps
Now we’ve dipped our toe into wiring up a procedural interface with Rx. So far, we haven’t really reaped any benefits. We’re simply calling into our old code differently. Having started here, we’re now one step closer to having a proper, Rx implementation.
In the next post, we’ll start to explore a more Rx-y way of going about
implementing our view controller. This will include the real coup de grâce:
getting rid of
var count. | https://www.caseyliss.com/2016/12/16/rxswift-primer-part-2?utm_source=Swift_Developments&utm_medium=email&utm_campaign=Swift_Developments_Issue_69 | CC-MAIN-2020-45 | refinedweb | 1,758 | 73.27 |
Have you ever used an iterator adapter in Rust?
Called a method on
Option? Spawned a thread?
You’ve almost certainly used a closure. The design in Rust may seem
a little complicated, but it slides right into Rust’s normal ownership
model so let’s reinvent it from scratch.
The new design was introduced in RFC 114, moving Rust to a model for closures similar to C++11’s. The design builds on Rust’s standard trait system to allow for allocation-less statically-dispatched closures, but also giving the choice to opt-in to type-erasure and dynamic dispatch and the benefits that brings. It incorporates elements of inference that “just work” by ensuring that ownership works out.
Steve Klabnik has written some docs on Rust’s closures for the official documentation. I’ve explicitly avoided reading it so far because I’ve always wanted to write this, and I think it’s better to give a totally independent explanation while I have the chance. If something is confusing here, maybe they help clarify.
What’s a closure?
In a sentence: a closure is a function that can directly use variables from the scope in which it is defined. This is often described as the closure closing over or capturing variables (the captures). Collectively, the variables are called the environment.
Syntactically, a closure in Rust is an anonymous function0 value
defined a little like Ruby, with pipes:
|arguments...| body. For
example,
|a, b| a + b defines a closure that takes two arguments and
returns their sum. It’s just like a normal function declaration, with
more inference:
Just like a normal function, they can be called with parentheses:
closure(arguments...).
To illustrate the capturing, this code snippet calls
map on an
Option<i32>, which will call a closure on
the
i32 (if it exists) and create a new
Option containing the
return value of the call.
The closures are capturing the
x and
y variables, allowing them to
be used while mapping. (To be more convincing, imagine they were only
known at runtime, so that one couldn’t just write
val + 3 inside the
closure.)
Back to basics
Now that we have the semantics in mind, take a step back and riddle me
this: how would one implement that sort of generic
map if Rust
didn’t have closures?
The functionality of
Option::map we’re trying to duplicate is (equivalently):
We need to fill in the
... with something that transforms an
X into
a
Y. The biggest constraint for perfectly replacing
Option::map is
that it needs to be generic in some way, so that it works with
absolutely any way we wish to do the transformation. In Rust, that
calls for a generic bounded by a trait.
This trait needs to have a method that converts some specific type
into another. Hence there’ll have to be form of type parameters to
allow the exact types to be specified in generic bounds like
map. There’s two choices: generics in the trait definition (“input
type parameters”) and associated types (“output type parameters”). The
quoted names hint at the choices we should take: the type that gets
input into the transformation should be a generic in the trait, and
the type that is output by the transformation should be an associated
type.1
So, our trait looks something like:
The last question is what sort of
self (if any) the method should
take?
The transformation should be able to incorporate arbitrary information
beyond what is contained in
Input. Without any
self argument, the
method would look like
fn transform(input: Input) -> Self::Output
and the operation could only depend on
Input and global
variables (ick). So we do need one.
The most obvious options are by-reference
&self,
by-mutable-reference
&mut self, or by-value
self. We want to allow
the users of
map to have as much power as possible while still
enabling
map to type-check. At a high-level
self gives
implementers (i.e. the types users define to implement the trait)
the most flexibility, with
&mut self next and
&self the least
flexible. Conversely,
&self gives consumers of the trait
(i.e. functions with generics bounded by the trait) the most
flexibility, and
self the least.
(“Move out” and “mutate” in the implementer column are referring to data stored inside
self.)
Choosing between them is a balance, we usually want to chose the highest row of the table that still allows the consumers to do what they need to do, as that allows the external implementers to do as much as possible.
Starting at the top of that table: we can try
self. This gives
fn
transform(self, input: Input) -> Self::Output. The by-value
self
will consume ownership, and hence
transform can only be called
once. Fortunately,
map only needs to do the transformation once, so
by-value
self works perfectly.
In summary, our
map and its trait look like:
The example from before can then be reimplemented rather verbosely, by
creating structs and implementing
Transform to do the appropriate
conversion for that struct.
We’ve manually implemented something that seems to have the same
semantics as Rust closures, using traits and some structs to store and
manipulate the captures. In fact, the struct has some uncanny
similarities to the “environment” of a closure: it stores a pile of
variables that need to be used in the body of
transform.
How do real closures work?
Just like that, plus a little more flexibility and syntactic
sugar. The real definition of
Option::map is:
FnOnce(X) -> Y is another name for our
Transform<X, Output = Y>
bound, and,
f(x) for
transform.transform(x).
There are three traits for closures, all of which provide the
...(...) call syntax (one could regard them as different kinds of
operator() in C++). They differ only by the
self type of the call
method, and they cover all of the
self options listed above.
These traits are covering exactly the three core ways to handle data in Rust, so having each of them meshes perfectly with Rust’s type-system.
When you write
|args...| code... the compiler will implicitly define
a unique new struct type storing the captured variables, and then
implement one of those traits using the closure’s body, rewriting any
mentions of captured variables to go via the closure’s
environment. The struct type doesn’t have a user visible name, it is
purely internal to the compiler. When the program hits the closure
definition at runtime, it fills in an instance of struct and passes
that instance into whatever it needs to (like we did with our
map
above).
There’s two questions left:
- how are variables captured? (what type are the fields of the environment struct?)
- which trait is used? (what type of
selfis used?)
The compiler answers both by using some local rules to choose the version that will give the most flexibility. The local rules are designed to be able to be checked only knowing the definition the closure, and the types of any variables it captures.2
By “flexibility” I mean the compiler chooses the option that (it thinks) will compile, but imposes the least on the programmer.
Structs and captures
If you’re familiar with closures in C++11, you may recall the
[=]
and
[&] capture lists: capture variables by-value3 and
by-reference respectively. Rust has similar capability: variables can
be captured by-value—the variable is moved into the closure
environment—or by-reference—a reference to the variable is stored
in the closure environment.
By default, the compiler looks at the closure body to see how captured variables are used, and uses that to infers how variables should be captured:
- if a captured variable is only ever used through a shared reference, it is captured by
&reference,
- if it used through a mutable reference (including assignment), it is captured by
&mutreference,
- if it is moved, it is forced to be captured by-value. (NB. using a
Copytype by-value only needs a
&reference, so this rule only applies to non-
Copyones.)
The algorithm seems a little non-trivial, but it matches exactly the mental model of a practiced Rust programmer, using ownership/borrows as precisely as it can. In fact, if a closure is “non-escaping”, that is, never leaves the stack frame in which it is created, I believe this algorithm is perfect: code will compile without needing any annotations about captures.
To summarise, the compiler will capture variables in the way that is
least restrictive in terms of continued use outside the closure (
&
is preferred, then
&mut and lastly by-value), and that still works
for all their uses within the closure. This analysis happens on a
per-variable basis, e.g.:
To focus on the flexibility: since
x is only captured by shared
reference, it is legal for it be used while
closure exists, and
since
y is borrowed (by mutable reference) it can be used once
closure goes out of scope, but
z cannot be used at all, even once
closure is gone, since it has been moved into the
closure value.
The compiler would create code that looks a bit like:
The struct desugaring allows the full power of Rust’s type system is brought to bear on ensuring it isn’t possible to accidentally get a dangling reference or use freed memory or trigger any other memory safety violation by misusing a closure. If there is problematic code, the compiler will point it out.
move and escape
I stated above that the inference is perfect for non-escaping closures… which implies that it is not perfect for “escaping” ones.
If a closure is escaping, that is, if it might leave the stack frame where it is created, it must not contain any references to values inside that stack frame, since those references would be dangling when the closure is used outside that frame: very bad. Fortunately the compiler will emit an error if there’s a risk of that, but returning closures can be useful and so should be possible; for example4:
Looks good, except… it doesn’t actually compile:
The problem is clearer when everything is written as explicit structs:
x only needs to be captured by reference to be used with
+, so the
compiler is inferring that the code can look like:
x goes out of scope at the end of
make_adder so it is illegal to
return something that holds a reference to it.
So how do we fix it? Wouldn’t it be nice if the compiler could tell us…
Well, actually, I omitted the last two lines of the error message above:
A new keyword! The
move keyword can be placed in front of a closure
declaration, and overrides the inference to capture all variables by
value. Going back to the previous section, if the code used
let
closure = move || { /* same code */ } the environment struct would
look like:
Capturing entirely by value is also strictly more general than capturing by reference: the reference types are first-class in Rust, so “capture by reference” is the same as “capture a reference by value”. Thus, unlike C++, there’s little fundamental distinction between capture by reference and by value, and the analysis Rust does is not actually necessary: it just makes programmers’ lives easier.
To demonstrate, the following code will have the same behaviour and
same environment as the first version, by capturing references using
move:
The set of variables that are captured is exactly those that are used
in the body of the closure, there’s no fine-grained capture lists like
in C++11. The
[=] capture list exists as the
move keyword, but
that is all.
We can now solve the original problem of returning from
make_adder:
by writing
move we force the compiler to avoid any
implicit/additional references, ensuring that the closure isn’t tied
to the stack frame of its birth. If we take the compiler’s suggestion
and write
Box::new(move |y| x + y), the code inside the compiler
will look more like:
It is clear that the compiler doesn’t infer when
move is required
(or else we wouldn’t need to write it), but the fact that the
message exists suggests that the compiler does know enough to infer
when
move is necessary or not… in some cases. Unfortunately, doing
so in general in a reliable way (a
help message can be
heuristic/best-effort, but inference built into the language cannot
be), would require more than just an analysis of the internals of the
closure body: it would require more complicated machinery to look at
how/where the closure value is used.
Traits
The actual “function” bit of closures are handled by the traits mentioned above. The implicit struct types will also have implicit implementations of some of those traits, exactly those traits that will actually work for the type.
Let’s start with an example: for the
make_adder example, the
Fn
trait is implemented for the implicit closure struct:
In reality, there are also implicit implementations5 of
FnMut and
FnOnce for
Closure, but
Fn is the “fundamental” one
for this closure.
There’s three traits, and so seven non-empty sets of traits that could6 possibly be implemented… but there’s actually only three interesting configurations:
Fn,
FnMutand
FnOnce,
FnMutand
FnOnce,
- only
FnOnce.
Why? Well, the three closure traits are actually three nested sets:
every closure that implements
Fn can also implement
FnMut (if
&self works,
&mut self also works; proof:
&*self), and similarly
every closure implementing
FnMut can also implement
FnOnce. This
hierarchy is enforced at the type level,
e.g.
FnMut
has declaration:
In words: anything that implements
FnMut must also implement
FnOnce.
There’s no subtlety required when inferring what traits to implement
as the compiler can and will just implement every trait for which
the implementation is legal. This is in-keeping with the “offer
maximum flexibility” rule that was used for the inference of the
capture types, since more traits means more options. The subset nature
of the
Fn* traits means that following this rule will always result
in one of the three sets listed above being implemented.
As an example, this code demonstrates a closure for which an
implementation of
Fn is illegal but both
FnMut and
FnOnce are
fine.
It is illegal to mutate via a
& &mut ..., and
&self is creating
that outer shared reference. If it was
&mut self or
self, it would
be fine: the former is more flexible, so the compiler implements
FnMut for
closure (and also
FnOnce).
Similarly, if
closure was to be
|| drop(v);—that is, move out of
v—it would be illegal to implement either
Fn or
FnMut, since
the
&self (respectively
&mut self) means that the method would be
trying to steal ownership out of borrowed data: criminal.
Flexibility
One of Rust’s goals is to leave choice in the hands of the programmer, allowing their code to be efficient, with abstractions compiling away and just leaving fast machine code. The design of closures to use unique struct types and traits/generics is key to this.
Since each closure has its own type, there’s no compulsory need for heap allocation when using closures: as demonstrated above, the captures can just be placed directly into the struct value. This is a property Rust shares with C++11, allowing closures to be used in essentially any environment, including bare-metal environments.
The unique types does mean that one can’t use different closures
together automatically, e.g. one can’t create a vector of several
distinct closures. They may have different sizes and require different
invocations (different closures correspond to different internal code,
so a different function to call). Fortunately, the use of traits to
abstract over the closure types means one can opt-in to these features
and their benefits “on demand”, via trait objects: returning
the
Box<Fn(i32) -> i32> above used a trait object.
An additional benefit to the approach of unique types and generics means that, by default, the compiler has full information about what closure calls are doing at each call site, and so has the choice to perform key optimisations like inlining. For example, the following snippets compile to the same code,
(When I tested it by placing them into separate functions in a single binary, the compiler actually optimised the second function to a direct call to the first.)
This is all due to how Rust implements generics via monomorphisation, where generic functions are compiled for each way their type parameters are chosen, explicitly substituting the generic type with a concrete one. Unfortunately, this isn’t always an optimisation, as it can result in code bloat, where there are many similar copies of a single function, which is again something that trait objects can tackle: by using a trait object instead, one can use dynamically dispatched closures to ensure there’s only one copy of a function, even if it is used with many different closures.
The final binary will have two copies of
generic_closure, one for
A and one for
B, but only one copy of
closure_object. In fact,
there are implementations of the
Fn* traits for pointers, so one can
even use a trait object directly with
generic_closure,
e.g.
generic_closure((&|x| { ... }) as &Fn(_)): so users of
higher-order functions can choose which trade-off they want themselves.
All of this flexibility falls directly out of using traits7 for closures, and the separate parts are independent and very compositional.
The power closures offer allow one to build high-level, “fluent” APIs
without losing performance compared to writing out the details by
hand. The prime example of this is
iterators: one can write long
chains of calls to adapters like
map and
filter which get
optimised down to efficient C-like code. (For example, I wrote
a post that demonstrates this, and the situation has only
improved since then: the closure design described here was implemented
months later.)
In closing
Rust’s C++11-inspired closures are powerful tools that allow for high-level and efficient code to be build, marrying two properties often in contention. The moving parts of Rust’s closures are built directly from the normal type system with traits, structs and generics, which allows them to automatically gain features like heap allocation and dynamic dispatch, but doesn’t require them.
(Thanks to Steve Klabnik and Aaron Turon for providing feedback on a draft, and many commenters on /r/rust and on IRC for finding inaccuracies and improvements.)
- users
- /r/rust
The Rust
|...| ...syntax is more than just a closure: it’s an anonymous function. In general, it’s possible to have things that are closures but aren’t anonymous (e.g. in Python, functions declared with
def foo():are closures too, they can refer to variables in any scopes in which the
def foois contained). The anonymity refers to the fact that the closure expression is a value, it’s possible to just use it directly and there’s no separate
fn foo() { ... }with the function value referred to via
foo. ↩
This choice is saying that transformers can be overloaded by the starting type, but the ending type is entirely determined by the pair of the transform and the starting type. Using an associated type for the return value is more restrictive (no overloading on return type only) but it gives the compiler a much easier time when inferring types. Using an associated type for the input value too would be too restrictive: it is very useful for the output type to depend on the input type, e.g. a transformation
&'a [i32]to
&'a i32(by e.g. indexing) has the two types connected via the generic lifetime
'a. ↩
This statement isn’t precisely true in practice, e.g.
rustcwill emit different errors if closures are misused in certain equivalent-but-non-identical ways. However, I believe these are just improved diagnostics, not a fundamental language thing… however, I’m not sure. ↩
“By-value” in C++, including
[=], is really “by-copy” (with some copy-elision rules to sometimes elide copies in certain cases), whereas in Rust it is always “by-move”, more similar to rvalue references in C++. ↩
Since closure types are unique and unnameable, the only way to return one is via a trait object, at least until Rust gets something like the “abstract return types” of RFC 105, something much desired for handling closures. This is a little like an interface-checked version of C++11’s
decltype(auto), which, I believe, was also partly motivated by closures with unnameable types. ↩
I wrote an invalid
Fnimplementation because the real version is ugly and much less clear, and doesn’t work with stable compilers at the moment. But since you asked, here is what’s required:
#![feature(unboxed_closures, core)] impl Fn<(i32,)> for Closure { extern "rust-call" fn call(&self, (y,): (i32,)) -> i32 { self.x + y } } impl FnMut<(i32,)> for Closure { extern "rust-call" fn call_mut(&mut self, args: (i32,)) -> i32 { self.call(args) } } impl FnOnce<(i32,)> for Closure { type Output = i32; extern "rust-call" fn call_once(self, args: (i32,)) -> i32 { self.call(args) } }
Just looking at that, one might be able to guess at a few of the reasons that manual implementations of the function traits aren’t stabilised for general use. The only way to create types implementing those traits with the 1.0 compiler is with a closure expression. ↩
I’m ignoring the inheritance, which means that certain sets are actually statically illegal, i.e., without other constraints there are seven possibilities. ↩
C++ has a similar choice, with
std::functionable to provide type erasure/dynamic dispatch for closure types, although it requires separate definition as a library type, and requires allocations. The Rust trait objects are a simple building block in the language, and don’t require allocations (e.g.
&Fn()is a trait object that can be created out of a pointer to the stack). ↩ | https://huonw.github.io/blog/2015/05/finding-closure-in-rust/ | CC-MAIN-2018-09 | refinedweb | 3,682 | 57.4 |
public class ReservedNodeOffering
The currency code for the compute nodes offering.
The duration, in seconds, for which the offering will reserve the node.
The upfront fixed charge you will pay to purchase the specific reserved node offering.
The node type offered by the reserved node offering.
The anticipated utilization of the reserved node, as defined in the reserved node
offering.
The charge to your account regardless of whether you are creating any clusters using
the node offering. Recurring charges are only in effect for heavy-utilization
reserved nodes.
The offering identifier.
The rate you are charged for each hour the cluster that is using the offering
is running. | https://docs.aws.amazon.com/sdkfornet1/latest/apidocs/html/T_Amazon_Redshift_Model_ReservedNodeOffering.htm | CC-MAIN-2018-30 | refinedweb | 108 | 59.4 |
Corrupted heap space using QtXmld4.dll (VS 2010)
hello all,
to make a long story short, here is the code:
@
#include <QtXml\qdom.h>
#include <qdebug.h>
#include <crw.h>
int main(int argc, char *argv[])
{
QFile f(".\app.xml");
QString errorMsg;
int a, b;
QDomDocument doc( "appsettings" );
if( !f.open( QIODevice::ReadOnly ) )
return -1;
if( !doc.setContent( &f, &errorMsg, &a, &b ) ) //here is where I get the exception
{
f.close();
return -2;
}
f.close();
return 0;
}
@
linking libraries : qtmaind.lib, QtCored4.lib, QtXmld4.lib
and, th callstack looks like:
@
ntdll.dll!77690844()
[Frames below may be incorrect and/or missing, no symbols loaded for ntdll.dll]
ntdll.dll!77652a74()
ntdll.dll!7760cd87()
QtXmld4.dll!_unlock(int locknum) Line 375 C
QtXmld4.dll!_free_dbg(void * pUserData, int nBlockUse) Line 1270 + 0x7 bytes C++
KernelBase.dll!7582468e()
QtXmld4.dll!_CrtIsValidHeapPointer(const void * pUserData) Line 2036 C++
QtXmld4.dll!_free_dbg_nolock(void * pUserData, int nBlockUse) Line 1322 + 0x9 bytes C++
QtXmld4.dll!_free_dbg(void * pUserData, int nBlockUse) Line 1265 + 0xd bytes C++
QtXmld4.dll!operator delete(void * pUserData) Line 54 + 0x10 bytes C++
QtXmld4.dll!QTextDecoder::`scalar deleting destructor'() + 0x21 bytes C++
QtXmld4.dll!QXmlInputSource::~QXmlInputSource() Line 1357 + 0x22 bytes C++
QtXmld4.dll!QDomDocument::setContent(QIODevice * dev, bool namespaceProcessing, QString * errorMsg, int * errorLine, int * errorColumn) Line 6755 + 0x31 bytes C++
QtXmld4.dll!QDomDocument::setContent(QIODevice * dev, QString * errorMsg, int * errorLine, int * errorColumn) Line 6815 C++
test.exe!main(int argc, char * * argv) Line 17 + 0x19 bytes C++
test.exe!__tmainCRTStartup() Line 278 + 0x19 bytes C
test.exe!mainCRTStartup() Line 189 C
kernel32.dll!76c23677()
ntdll.dll!775f9f02()
ntdll.dll!775f9ed5()
@
(test.exe is main.cpp)...
any ideas?
Thanks,
G
There is no prebuilt binary version of Qt for Visual Studio 2010.
Did you compile Qt yourself with your VS 2010 or did you use the binary from the download page (which is for VS 2008)?
Thank you for the replay.
I build it myself, the only thing that I changed was the runtime-library flag from /MDd to /MTd so my client (to be) will not need to install the redistribution, but I used the configuration provided with the source for vs2010...
also, there is no .NET at all here, and, other things, like qtsqlite and gui that I use work perfectly.
Qt and the client code must be compiled and linked with the same linker flags. Do not mix them, otherwise you might end up using two different C/C++ runtimes, which leads to memory corruption when new is called in one implementation and delete in the other.
well, they are, all /MTd and /MT (for release)
Hm, strange. I've no clue what's going wrong there. Maybe someone else will jump in - I don't have VS2010 at hand to do a check myself.
I hope so, I will try creating a static library set, just to check, worst case, the XML parsing DLL I'm working on will be created using static libraries.
anyone... HELP! hahaha
Are you sure, your Qt libs were build with /MTd and your binary also for debug and all with /MT for release?
It looks (from the defect behavior) very like these do not fit together...
How did you change the flags for Qt? Did you use dependency viewer to verify, Qt dies not use the redistributables?
I will build again and run all the nmake output to a file...
according to the build log (all 5Meg of it) there is no indication of any use of -MDd or -MD, just MT.
I am not that strong in c/c++ but,
according to the callstack we can see that the heap corruption is due to a call of a distructor of the passed &file, or one of it's components. so, we can say that the code itself cannot work with such parameters as -M, because of such call, right?
Hi,
I thopught a bit about this, /MT means link against a static library, that means you add it to all binaries, you create. If you link a dll with /MT, the code is added to that dll. Then you link your executable againtst that and it is also added there. Then you have two different heaps, which leads to that crash. If you use /MT, you MUST use Qt as static library or use /MD (and the vc redistributables).
thank you Gerolf, after sleeping on it, I came also to the same conclusion.
In order for QT to be used in such a way the code must be created with 'awareness' of two (or more) different heap spaces, so, I will be linking QT statically for the xml parser..
and, the more I think about it, the more it makes sense also.
so, thank you all for the help
I rhink, you have to link Qt complete statically, or use /MD. The Qt libraries are not memory neutral (which means, memory allocate din one library may be freed in another one). | https://forum.qt.io/topic/4126/corrupted-heap-space-using-qtxmld4-dll-vs-2010 | CC-MAIN-2017-43 | refinedweb | 828 | 74.9 |
You can subscribe to this list here.
Showing
5
results of 5
Hello Tom,
heavily guessing (I did not try your code)...
Guess #1:
Maybe you use another CPython version (different from 2.1)
Guess #2:
Try renaming the parameters like this (since globals/locals are
functions you might get a sort of name clash):
def import_hook(name, globaldict=None, localdict=None, fromlist=None):
return original_import(name, globaldict, localdict, fromlist)
instead of your version below. I faintly remember a very similar
problem go away using different parameter names.
Lucky if this solves your problem, but no guarantee at all !
Best wishes,
Oti.
[ Russo, Tom ]
<snipped >
> def import_hook(name, globals=None, locals=None, fromlist=None):
> return original_import(name, globals, locals, fromlist) #
> this is where the error occurs
__________________________________________________
Do You Yahoo!?
Send FREE Valentine eCards with Yahoo! Greetings!
> The python debugger (pdb module) works well for Jython. See
Thanks. Exposing my newbieness, didn't consider something like that from
python would work with jython (i've never used cpython).
> JSwat - GPL & appears flexible:
Yes, I found this one too. Well, something for me to look at for a little
project on the side :)
Thanks
-Ed
Hi all,
I'm trying to augment the functionality of import so it will try to load
modules out of a database if it doesn't find them on the search path. So,
before anything else, I want to mimic the regular import:
<file name="simple_import.py">
import imp, __builtin__
original_import = __builtin__.__import__
def import_hook(name, globals=None, locals=None, fromlist=None):
return original_import(name, globals, locals, fromlist) #
this is where the error occurs
# Install our modified import function.
__builtin__.__import__ = import_hook
</file>
When I run the following in cpython, everything works fine:
>>> import simple_import
>>> import urllib
However, when I run the same commands with jython2.1 I get:
<output>
Traceback (innermost last):
File "ex.py", line 2, in ?
File
"C:\cygwin\home\Administrator\Discovery\verification\python\shared\simple_im
port.py", line 6, in import_hook
File "C:\jython21\Lib\urllib.py", line 85, in ?
File "C:\jython21\Lib\urllib.py", line 321, in URLopener
File
"C:\cygwin\home\Administrator\Discovery\verification\python\shared\simple_im
port.py", line 6, in import_hook
UnboundLocalError: local: 'globals'
</output>
which seems strange since globals is one of import_hook's formal parameters.
I haven't been able to find this as a documented difference between cpython
and jython-- is it? If not, does anyone know how I can fix this?
thanks
_t
Edward Povazan wrote:
>
> Hello,
>
> Is there a jython debugger in existence?
The python debugger (pdb module) works well for Jython. See
However, it does lack cross-language debugging capability (i.e., you can't step into Java methods).
> If not, has anyone ever
> contemplated what would be necessary to implement one?
I imagine anyone who uses Jython extensively has contemplated it, :-) but I don't think anyone has developed one yet (though I would _love_ to be proven wrong on that!)
It would probably need to be out-of-process (or it couldn't be written in java/jython...), and use the Java Platform Debugger Architecture interfaces (), especially the Wire Protocol to control the VM-to-be-debugged, and then use the Python Debugger to control the Jython interpreter within the VM.
It may be best to enhance an existing Java debugger with knowledge about controlling pdb (or the features pdb is built on - bdb & sys.settrace() )
Candidates for extension include:
JSwat - GPL & appears flexible:
NetBeans IDE - MPL-like & appears big:)
So yeah, I've contemplated it, too. :-)
kb
Hello,
Is there a jython debugger in existence? If not, has anyone ever
contemplated what would be necessary to implement one?
Thanks
-Ed | http://sourceforge.net/p/jython/mailman/jython-users/?viewmonth=200202&viewday=14&style=flat | CC-MAIN-2015-32 | refinedweb | 612 | 56.96 |
.
Internationalization, or i18n ('i', followed by 18 letters, then an 'n'), is the process of writing your application so that it can be run in any locale. This means taking into account such things as:
Localization, or l10n ('l', followed by 10 characters, then an 'n'), is the process of taking an internationalized application and adapting it for a specific locale.
Generally speaking, programmers internationalize their applications and translation teams localize them. your application is ready to be localized you have to follow a few simple rules. All user-visible strings in your, defined in klocalizedstring.h, which you must wrap all strings that should be displayed in. The QString returned by i18n() is the translated (if necessary) string. This makes creating translatable widgets as simple as in this example:
#include <klocalizedstring.h> [...] QPushButton* myButton = new QPushButton(i18n("Translate this!")); another method provided: ki18n(). This allows one to mark strings that should be translated later as such. The ki18n() will return a KLocalizedString, which can be finalized into a QString (i.e. translated for real) after the KInstance has been created, using its toString() method.
ki18n() method, i18nc() which takes two const char* arguments. The first argument is an additional contextual description of the second string which will be translated. The first string is used to find the proper corresponding translation at run-time and is shown to translators to help them understand the meaning of the string.
Use i18nc() whenever process.
In the file manager example above, one might therefore write:
contextMenu->addAction(i18nc("verb, to view something", "View")); viewMenu->addAction(i18nc("noun, the view", "View"));
Now the two strings will be properly translatable, both by the human translators and at runtime by KLocale.
Use this form of i18n whenever the string to translate is short or the meaning is hard to discern when the context is not exactly known. For example:
QString up = i18nc("Go one directory up in the hierarchy", "Up"); QString relation = i18nc("A person's name and their familial relationship to you.", "%1 is your %2", name, relationship);
Contexts can also be added when building forms in Qt Designer. Each widget label, including tooltips and whatsthis texts, has a "disambiguation" attribute (named "comment" prior to Qt 4.5), which will serve the same purpose as first argument to i18nc() call.
KDE provides a standard set of strings to identify the semantic context of translatable strings. These are defined in the KUIT Semantic Markup scheme.
Below is a chart showing some common words and phrases in English and the context that must be used with them to ensure proper translation of them in other languages.
Plurals are handled differently from language to language. Many languages have different plurals for 2, 10, 20, 100, etc. When the string you want translated refers to more than one item, you must use the third form of i18n, the i18np(). It takes the singular and plural English forms as its first two arguments, followed by any substitution arguments as usual, but at least one of which should be integer-valued. For example:
msgStr = i18np("1 image in album %2", "%1 images in album %2", numImages, albumName); msgStr = i18np("Delete Group", "Delete Groups", numGroups);
i18np() gets expanded to as many cases as required by the user's language. In English, this is just two forms while in other languages it may be more, depending on the value of the first integer-valued argument.
Note that this form should be used even if the string always refers to more than one item as some languages use a singular form even when referring to a multiple (typically for 21, 31, etc.). This code:
i18n("%1 files were deleted", numFilesDeleted);
is therefore incorrect and should instead be:
i18np("1 file was deleted", "%1 files were deleted", numFilesDeleted);
To provide context as well as pluralization, use i18ncp as in this example:
i18ncp("Personal file", "1 file", "%1 files", numFiles);
In some cases pluralization is needed even if English does not need it. For example:
i18nc("%1 is a comma separated list of platforms the software is available on.", "Available on %1", platforms);
One reason for this is that in some languages the preposition "on" needs to be replaced with a descriptive noun in a certain form, which essentially in English means the same as this:
i18nc("%1 is a comma separated list of platforms the software is available on.", "Available on platforms %1", platforms);
And that is why the correct way is to use pluralization here as well:
i18ncp("%2 is a comma separated list of platforms the software is available on.", "Available on %2", "Available on %2", numberOfPlatforms, platforms); look different based on locale, but one also has to take care of other aspects such as:
QDate only implements a hybrid Julian/Gregorian calendar system, if the user's locale has a different calendar system then any direct calls to QDate will result in incorrect dates being read and displayed, and incorrect date maths being performed. All date calculations and formatting must be performed through the KLocale and KCalendarSystem methods which provide a full set of methods matching those available in QDate. The current locale calendar system can be accessed via KGlobal::locale()->calendar(). The KLocale date formatting methods will always point to the current global calendar system.
KLocale provides, among others, these methods:
To make QML code translatable, KDeclarative provides the same i18n() calls described above. To enable parsing at runtime, you need to install a KDeclarative object in the QDeclarativeEngine:
KDeclarative kdeclarative; //view refers to the QDeclarativeView kdeclarative.setDeclarativeEngine(view.engine()); kdeclarative.initialize(); //binds things like kconfig and icons kdeclarative.setupBindings();
The application also needs to link to libkdeclarative.
There are a number of common problems that may prevent an application being properly localized. See Avoiding Common Localization Pitfalls to learn more about them, and how to avoid them. | https://techbase.kde.org/index.php?title=Development/Tutorials/Localization/i18n&diff=82316&oldid=6132 | CC-MAIN-2018-30 | refinedweb | 977 | 51.18 |
On 12/10/2010 20:14, See. > The code does require Python 2 and the use of except ... as ... requires at least version 2.6. Line 51 The __init__ method should always return None. There's no need to be explicit about it, just use a plain "return". Line 68 Instead of: if not section in self.sections: use: if section not in self.sections: Line 78 This: file(self.path, 'w') will never return None. If it can't open the file then it'll raise an exception. The error message says: "Couldn't open %s to read a template." but it's opening the file for writing. Line 82 You can't really rely on the destructor __del__ being called. Line 333 Shouldn't you be checking that the name of the attribute you're setting doesn't clash with one of the existing attributes? Are you sure that a dict wouldn't be a better idea? Line 447 The form: except ... as ... is in Python versions >= 2.6, but not earlier. Line 464 This use of del just deletes the name from the namespace and won't necessarily call the __del__ method of the 'source' object. It's better to rely on something more explicit like a 'close' method. (If you can't be sure which version of Python it'll use then context managers are probably out anyway!) | https://mail.python.org/pipermail/python-list/2010-October/589503.html | CC-MAIN-2017-30 | refinedweb | 232 | 86.71 |
IN DEPTH
THIRD EDITION
Jon Skeet
FOREWORD BY ERIC LIPPERT
MANNING
Praise for the Second Edition
A masterpiece about C#.
—Kirill Osenkov, Microsoft C# Team
If you are looking to master C# then this book is a must-read.
—Tyson S. Maxwell
Sr. Software Engineer, Raytheon
We're betting that this will be the best C# 4.0 book out there.
—Nikander Bruggeman and Margriet Bruggeman
.NET consultants, Lois & Clark IT Services
A useful and engaging insight into the evolution of C# 4.
—Joe Albahari
Author of LINQPad and C# 4.0 in a Nutshell
One of the best C# books I have ever read.
—Aleksey Nudelman
CEO, C# Computing, LLC
This book should be required reading for all professional C# developers.
—Stuart Caborn
Senior Developer, BNP Paribas
A highly focused, master-level resource on language updates across all major C#
releases. This book is a must-have for the expert developer wanting to stay current with
new features of the C# language.
—Sean Reilly, Programmer/Analyst
Point2 Technologies
Why read the basics over and over again? Jon focuses on the chewy, new stuff!
—Keith Hill, Software Architect
Agilent Technologies
Everything you didn’t realize you needed to know about C#.
—Jared Parsons
Senior Software Development Engineer
Microsoft
Praise for the First Edition
Simply and
really liked the chapter about lambda expressions.
—Jose Rolando Guay Paz
Web Developer, CSW Solutions
This book wraps up the author’s great knowledge of the inner workings of C# and
hands it over to readers in a well-written, concise, usable book.
—Jim Holmes
Author of Windows Developer Power Tools
Every term is used appropriately and in the right context, every example is spot-on
and contains the least amount of code that shows the full extent of the feature...this
is a rare treat.
—Franck Jeannin, Amazon UK reviewer
If you have developed using C# for several years now, and would like to know the internals, this book is absolutely right for you.
—Golo Roden
Author, Speaker, and Trainer for .NET
and related technologies
The best C# book I’ve ever read.
—Chris Mullins, C# MVP
C# in Depth
THIRD EDITION
JON SKE.
20 Baldwin Road
PO Box 261
Shelter Island, NY 11964
Development editor
Copyeditor:
Proofreader:
Typesetter:
Cover designer:
ISBN 9781617291340
Printed in the United States of America
1 2 3 4 5 6 7 8 9 10 – MAL – 18 17 16 15 14 13
Jeff Bleiel
Andy Carroll
Katie Tennant
Dottie Marsico
Marija Tudor
To my boys, Tom, Robin, and William
brief contents
PART 1 PREPARING FOR THE JOURNEY .......................................1
1
■
The changing face of C# development
2
■
Core foundations: building on C# 1 29
3
PART 2 C# 2: SOLVING THE ISSUES OF C# 1 .............................57
3
■
Parameterized typing with generics
59
4
■
Saying nothing with nullable types
105
5
■
Fast-tracked delegates
6
■
Implementing iterators the easy way
7
■
Concluding C# 2: the final features
133
159
182
PART 3 C# 3: REVOLUTIONIZING DATA ACCESS . .....................205
8
■
Cutting fluff with a smart compiler 207
9
■
Lambda expressions and expression trees
10
■
Extension methods 262
11
■
Query expressions and LINQ to Objects
12
■
LINQ beyond collections
vii
328
232
285
viii
BRIEF CONTENTS
PART 4 C# 4: PLAYING NICELY WITH OTHERS .........................369
13
■
Minor changes to simplify code
14
■
371
Dynamic binding in a static language
409
PART 5 C# 5: ASYNCHRONY MADE SIMPLE ..............................461
15
■
Asynchrony with async/await 463
16
■
C# 5 bonus features and closing thoughts
519
contents
foreword xix
preface xxi
acknowledgments xxii
about this book xxiv
about the author xxix
about the cover illustration
xxx
PART 1 PREPARING FOR THE JOURNEY .............................1
1
The changing face of C# development 3
1.1
Starting with a simple data type 4
The Product type in C# 1 5 Strongly typed collections in C# 2 6
Automatically implemented properties in C# 3 7 Named
arguments in C# 4 8
■
■
1.2
Sorting and filtering 9
Sorting products by name
1.3
9
■
12
Handling an absence of data 14
Representing an unknown price
default values 16
1.4
Querying collections
Introducing LINQ
14
■
Optional parameters and
16
Query expressions and in-process queries
XML 18 LINQ to SQL 19
■
ix
17
■
Querying
x
CONTENTS
1.5
COM and dynamic typing
20
Simplifying COM interoperability
dynamic language 21
1.6
1.7
Interoperating with a
■
Writing asynchronous code without the heartache
Dissecting the .NET platform 23
C#, the language
1.8
20
24
Runtime
■
24
Making your code super awesome
22
Framework libraries 24
■
25
Presenting full programs as snippets 25 Didactic code isn’t
production code 26 Your new best friend: the language
specification 27
■
■
1.9
2
Summary
28
Core foundations: building on C# 1 29
2.1
Delegates 30
A recipe for simple delegates 30 Combining and removing
delegates 35 A brief diversion into events 36 Summary of
delegates 37
■
■
2.2
■
Type system characteristics
38
C#’s place in the world of type systems 38 When is C# 1’s type
system not rich enough? 41 Summary of type system
characteristics 44
■
■
2.3
Value types and reference types
44
Values and references in the real world 45 Value and reference
type fundamentals 46 Dispelling myths 47 Boxing and
unboxing 49 Summary of value types and reference types 50
■
■
■
■
2.4
Beyond C# 1: new features on a solid base 51
Features related to delegates 51 Features related to the type
system 53 Features related to value types 55
■
■
2.5
Summary
56
PART 2 C# 2: SOLVING THE ISSUES OF C# 1 ...................57
3
Parameterized typing with generics 59
3.1
3.2
Why generics are necessary 60
Simple generics for everyday use
62
Learning by example: a generic dictionary 62
Generic types and
type parameters 64 Generic methods and reading generic
declarations 67
■
■
xi
CONTENTS
3.3
Beyond the basics 70
Type constraints 71 Type inference for type arguments of generic
methods 76 Implementing generics 77
■
■
3.4
Advanced generics
83
Static fields and static constructors 84 How the JIT compiler
handles generics 85 Generic iteration 87 Reflection and
generics 90
■
■
3.5
■
Limitations of generics in C# and other languages
94
Lack of generic variance 94 Lack of operator constraints or a
“numeric” constraint 99 Lack of generic properties, indexers,
and other member types 101 Comparison with C++
templates 101 Comparison with Java generics 103
■
■
■
■
3.6
4
Summary
104
Saying nothing with nullable types
4.1
105
What do you do when you just don’t have a value? 106
Why value type variables can’t be null 106
Patterns for representing null values in C# 1
4.2
107
System.Nullable<T> and System.Nullable
109
Introducing Nullable<T> 109 Boxing Nullable<T> and
unboxing 112 Equality of Nullable<T> instances 113
Support from the nongeneric Nullable class 114
■
■
4.3
C# 2’s syntactic sugar for nullable types
114
The ? modifier 115 Assigning and comparing with null 116
Nullable conversions and operators 118 Nullable logic 121
Using the as operator with nullable types 123 The null
coalescing operator 123
■
■
■
4.4
Novel uses of nullable types
126
Trying an operation without using output parameters 127
Painless comparisons with the null coalescing operator 129
4.5
5
Summary
131
Fast-tracked delegates 133
5.1
5.2
5.3
Saying goodbye to awkward delegate syntax 134
Method group conversions 136
Covariance and contravariance 137
Contravariance for delegate parameters 138 Covariance of
delegate return types 139 A small risk of incompatibility 141
■
■
xii
CONTENTS
5.4
Inline delegate actions with anonymous methods
142
Starting simply: acting on a parameter 142 Returning values
from anonymous methods 145 Ignoring delegate
parameters 146
■
■
5.5
Capturing variables in anonymous methods
148
Defining closures and different types of variables 148
Examining the behavior of captured variables 149 What’s
the point of captured variables? 151 The extended lifetime of
captured variables 152 Local variable instantiations 153
Mixtures of shared and distinct variables 155 Captured
variable guidelines and summary 156
■
■
■
■
5.6
6
Summary
158
Implementing iterators the easy way
159
6.1
C# 1: The pain of handwritten iterators
160
6.2
C# 2: Simple iterators with yield statements
163
Introducing iterator blocks and yield return 163 Visualizing
an iterator’s workflow 165 Advanced iterator execution
flow 167 Quirks in the implementation 170
■
■
■
6.3
Real-life iterator examples
172
Iterating over the dates in a timetable 172 Iterating over lines in
a file 173 Filtering items lazily using an iterator block and a
predicate 176
■
■
7
6.4
Pseudo-synchronous code with the Concurrency and
Coordination Runtime 178
6.5
Summary
180
Concluding C# 2: the final features 182
7.1
Partial types 183
Creating a type with multiple files 184 Uses of partial
types 186 Partial methods—C# 3 only! 188
■
■
7.2
Static classes 190
7.3
Separate getter/setter property access
7.4
Namespace aliases 193
Qualifying namespace aliases 194
alias 195 Extern aliases 196
■
■
192
The global namespace
xiii
CONTENTS
7.5
Pragma directives
197
Warning pragmas 197
7.6
7.7
Checksum pragmas
■
198
Fixed-size buffers in unsafe code 199
Exposing internal members to selected assemblies 201
Friend assemblies in the simple case 201 Why use
InternalsVisibleTo? 202 InternalsVisibleTo and signed
assemblies 203
■
■
7.8
Summary
204
PART 3 C# 3: REVOLUTIONIZING DATA ACCESS .............205
8
Cutting fluff with a smart compiler
8.1
8.2
207
Automatically implemented properties 208
Implicit typing of local variables 211
Using var to declare a local variable 211 Restrictions on implicit
typing 213 Pros and cons of implicit typing 214
Recommendations 215
■
■
8.3
Simplified initialization
216
Defining some sample types 216 Setting simple properties
Setting properties on embedded objects 219 Collection
initializers 220 Uses of initialization features 223
■
217
■
■
8.4
8.5
Implicitly typed arrays 224
Anonymous types 225
First encounters of the anonymous kind 225 Members of
anonymous types 227 Projection initializers 228 What’s the
point? 229
■
■
8.6
9
Summary
■
231
Lambda expressions and expression trees
9.1
232
Lambda expressions as delegates 234
Preliminaries: Introducing the Func<…> delegate types 234
First transformation to a lambda expression 235 Using a single
expression as the body 236 Implicitly typed parameter lists 236
Shortcut for a single parameter 237
■
■
9.2
Simple examples using List<T> and events
Filtering, sorting, and actions on lists
handler 240
238
■
238
Logging in an event
xiv
CONTENTS
9.3
Expression trees
241
Building expression trees programmatically 242 Compiling
expression trees into delegates 243 Converting C# lambda
expressions to expression trees 244 Expression trees at the heart of
LINQ 248 Expression trees beyond LINQ 249
■
■
■
■
9.4
Changes to type inference and overload resolution 251
Reasons for change: streamlining generic method calls 252
Inferred return types of anonymous functions 253 Two-phase
type inference 254 Picking the right overloaded method 258
Wrapping up type inference and overload resolution 260
■
■
9.5
10
Summary
260
Extension methods 262
10.1
10.2
Life before extension methods 263
Extension method syntax 265
Declaring extension methods 265 Calling extension
methods 267 Extension method discovery 268 Calling a
method on a null reference 269
■
■
10.3
■
Extension methods in .NET 3.5 271
First steps with Enumerable 271 Filtering with Where and
chaining method calls together 273 Interlude: haven’t we seen
the Where method before? 275 Projections using the Select method
and anonymous types 276 Sorting using the OrderBy
method 277 Business examples involving chaining 278
■
■
■
■
■
10.4
Usage ideas and guidelines
280
“Extending the world” and making interfaces richer 280 Fluent
interfaces 280 Using extension methods sensibly 282
■
■
10.5
11
Summary
284
Query expressions and LINQ to Objects 285
11.1
Introducing LINQ
286
Fundamental concepts in LINQ
model 291
11.2
286
■
Defining the sample data
Simple beginnings: selecting elements 292
Starting with a source and ending with a selection 293 Compiler
translations as the basis of query expressions 293 Range
variables and nontrivial projections 296 Cast, OfType, and
explicitly typed range variables 298
■
■
■
xv
CONTENTS
11.3
Filtering and ordering a sequence
300
Filtering using a where clause 300 Degenerate query
expressions 301 Ordering using an orderby clause 302
■
■
11.4
Let clauses and transparent identifiers
304
Introducing an intermediate computation with let
Transparent identifiers 306
11.5
Joins
305
307
Inner joins using join clauses 307 Group joins with join...into
clauses 311 Cross joins and flattening sequences using multiple
from clauses 314
■
■
11.6
Groupings and continuations
318
Grouping with the group...by clause 318
continuations 321
11.7
■
Query
Choosing between query expressions and dot
notation 324
Operations that require dot notation 324 Query expressions
where dot notation may be simpler 325 Where query expressions
shine 325
■
■
11.8
12
Summary
326
LINQ beyond collections 328
12.1
Querying a database with LINQ to SQL 329
Getting started: the database and model 330
queries 332 Queries involving joins 334
■
Initial
■
12.2
Translations using IQueryable and IQueryProvider 336
Introducing IQueryable<T> and related interfaces 337 Faking
it: interface implementations to log calls 338 Gluing expressions
together: the Queryable extension methods 341 The fake query
provider in action 342 Wrapping up IQueryable 344
■
■
■
■
12.3
LINQ-friendly APIs and LINQ to XML 344
Core types in LINQ to XML 345 Declarative construction 347
Queries on single nodes 349 Flattened query operators 351
Working in harmony with LINQ 352
■
■
12.4
Replacing LINQ to Objects with Parallel LINQ
353
Plotting the Mandelbrot set with a single thread 353 Introducing
ParallelEnumerable, ParallelQuery, and AsParallel 354
Tweaking parallel queries 356
■
xvi
CONTENTS
12.5
Inverting the query model with LINQ to Rx 357
IObservable<T> and IObserver<T> 358 Starting simply
(again) 360 Querying observables 360 What’s the
point? 363
■
■
12.6
■
Extending LINQ to Objects
364
Design and implementation guidelines
selecting a random element 365
12.7
Summary
364
■
Sample extension:
367
PART 4 C# 4: PLAYING NICELY WITH OTHERS ...............369
13
Minor changes to simplify code 371
13.1
Optional parameters and named arguments
Optional parameters 372
the two together 382
13.2
Named arguments
■
Improvements for COM interoperability
372
378
Putting
■
387
The horrors of automating Word before C# 4 387 The revenge of
optional parameters and named arguments 388 When is a ref
parameter not a ref parameter? 389 Calling named
indexers 390 Linking primary interop assemblies 391
■
■
■
■
13.3
Generic variance for interfaces and delegates
394
Types of variance: covariance and contravariance 394 Using
variance in interfaces 396 Using variance in delegates 399
Complex situations 399 Restrictions and notes 401
■
■
■
13.4
Teeny tiny changes to locking and field-like events
Robust locking
13.5
14
Summary
405
■
Changes to field-like events 406
407
Dynamic binding in a static language
14.1
405
What? When? Why? How?
409
411
What is dynamic typing? 411 When is dynamic typing useful,
and why? 412 How does C# 4 provide dynamic typing? 413
■
■
14.2
14.3
The five-minute guide to dynamic 414
Examples of dynamic typing 416
COM in general, and Microsoft Office in
particular 417 Dynamic languages such as IronPython 419
Dynamic typing in purely managed code 423
■
xvii
CONTENTS
14.4
Looking behind the scenes 429
Introducing the Dynamic Language Runtime 429 DLR core
concepts 431 How the C# compiler handles dynamic 434
The C# compiler gets even smarter 438 Restrictions on dynamic
code 441
■
■
■
14.5
Implementing dynamic behavior 444
Using ExpandoObject 444
■
Using DynamicObject
Implementing IDynamicMetaObjectProvider
14.6
Summary
448
455
459
PART 5 C# 5: ASYNCHRONY MADE SIMPLE ....................461
15
Asynchrony with async/await 463
15.1
Introducing asynchronous functions
First encounters of the asynchronous kind
the first example 467
15.2
Thinking about asynchrony
Syntax and semantics
■
Breaking down
468
■
Modeling
468
Fundamentals of asynchronous execution
asynchronous methods 471
15.3
465
465
472
Declaring an async method 472 Return types from async
methods 473 The awaitable pattern 474 The flow of await
expressions 477 Returning from an async method 481
Exceptions 482
■
■
■
■
15.4
15.5
Asynchronous anonymous functions 490
Implementation details: compiler transformation 492
Overview of the generated code 493 Structure of the skeleton
method 495 Structure of the state machine 497 One entry
point to rule them all 498 Control around await
expressions 500 Keeping track of a stack 501 Finding out
more 503
■
■
■
■
■
15.6
Using async/await effectively
■
503
The task-based asynchronous pattern 504 Composing async
operations 507 Unit testing asynchronous code 511
The awaitable pattern redux 515 Asynchronous operations in
WinRT 516
■
■
■
15.7
Summary
517
xviii
CONTENTS
16
C# 5 bonus features and closing thoughts 519
16.1
16.2
Changes to captured variables in foreach loops
Caller information attributes 520
520
Basic behavior 521 Logging 522 Implementing
INotifyPropertyChanged 523 Using caller information attributes
without .NET 4.5 524
■
■
■
16.3
appendix A
appendix B
appendix C
Closing thoughts
525
LINQ standard query operators 527
Generic collections in .NET 540
Version summaries 554
index 563
foreword higher can’t
xix
xx
FOREWORD
and capstan wrenches. They take delight and pride in being able to understand the
mechanisms of an instrument that has 5–10,000 a pianist or musician of any sort. But from my email
conversations with him as one of the C# team’s Most Valuable Professionals over the
years, from reading his blog, and from reading every word of each of his books at least
three times, it has become clear to me that Jon is that latter kind of software developer:
enthusiastic, knowledgeable, talented, curious, analytical—and requires new ways of thinking
about data, functions, and the relationship between them. It’s not unlike trying to
play jazz after years of classical training—or vice versa. Either way, I’m looking forward
to finding out what sorts of functional compositions the next generation of C# programmers come up with. Happy composing, and thanks for choosing the key of C# to
do it in.
ERIC LIPPERT
C# ANALYSIS ARCHITECT
COVERITY
preface
Oh boy. When writing this preface, I started off with the preface to the second edition,
which began by saying how long it felt since writing the preface to the first edition.
The second edition is now a distant memory, and the first edition seems like a whole
different life. I’m not sure whether that says more about the pace of modern life or my
memory, but it’s a sobering thought either way.
The development landscape has changed enormously since the first edition, and
even since the second. This has been driven by many factors, with the rise of mobile
devices probably being the most obvious. But many challenges have remained the
same. It’s still hard to write properly internationalized applications. It’s still hard to
handle errors gracefully in all situations. It’s still fairly hard to write correct multithreaded applications, although this task has been made significantly simpler by both
language and library improvements over the years.
Most importantly in the context of this preface, I believe developers still need to
know the language they’re using at a level where they’re confident in how it will
behave. They may not know the fine details of every API call they’re using, or even
some of the obscure corner cases of the language that they don’t happen to use,1 but
the core of the language should feel like a solid friend that the developer can rely on
to behave predictably.
In addition to the letter of the language you’re developing in, I believe there’s great
benefit in understanding its spirit. While you may occasionally find you have a fight on
your hands however hard you try, if you attempt to make your code work in the way the
language designers intended, your experience will be a much more pleasant one.
1
I have a confession to make: I know very little about unsafe code and pointers in C#. I’ve simply never needed
to find out about them.
xxi
acknowledgments
You might expect that putting together a third edition—and one where the main
change consists of two new chapters—would be straightforward. Indeed, writing the
“green field” content of chapters 15 and 16 was the easy part. But there’s a lot more to
it than that—tweaking little bits of language throughout the rest of the book, checking for any aspects which were fine a few years ago but don’t quite make sense now,
and generally making sure the whole book is up to the high standards I expect readers
to hold it to. Fortunately, I have been lucky enough to have a great set of people supporting me and keeping the book on the straight and narrow.
Most importantly, my family have been as wonderful as ever. My wife Holly is a children’s author herself, so our kids are used to us having to lock ourselves away for a
while to meet editorial deadlines, but they’ve remained cheerfully encouraging
throughout. Holly herself takes all of this in stride, and I’m grateful that she’s never
reminded me just how many books she’s started from scratch and completed in the
time I’ve been working on this third edition.
The formal peer reviewers are listed later on, but I’d like to add a note of personal
thanks to all those who ordered early access copies of this third edition, finding typos
and suggesting changes...also constantly asking when the book was coming out. The
very fact that I had readers who were eager to get their hands on the finished book
was a huge source of encouragement.
I always get on well with the team at Manning, and it’s been a pleasure to work with
some familiar friends from the first edition as well as newcomers. Mike Stephens and
Jeff Bleiel have guided the whole process smoothly, as we decided what to change
from the earlier editions and what to keep. They’ve generally put the whole thing into
xxii
ACKNOWLEDGMENTS
xxiii
the right shape. Andy Carroll and Katie Tennant provided expert copyediting and
proofreading, respectively, never once expressing irritation with my Englishness, pickiness, or general bewilderment. The production team has worked its magic in the
background, as ever, but I’m grateful to them nonetheless: Dottie Marsico, Janet Vail,
Marija Tudor, and Mary Piergies. Finally, I’d like to thank the publisher, Marjan Bace,
for allowing me a third edition and exploring some interesting future options.
Peer review is immensely important, not only for getting the technical details of
the book right, but also the balance and tone. Sometimes the comments we received
have merely shaped the overall book; in other cases I’ve made very specific changes in
response. Either way, all feedback has been welcome. So thanks to the following reviewers for making the book better for all of us: Andy Kirsch, Bas Pennings, Bret
Colloff, Charles M. Gross, Dror Helper, Dustin Laine, Ivan Todorović, Jon Parish,
Sebastian Martín Aguilar, Tiaan Geldenhuys, and Timo Bredenoort.
I’d particularly like to thank Stephen Toub and Stephen Cleary, whose early
reviews of chapter 15 were invaluable. Asynchrony is a particularly tricky topic to write
about clearly but accurately, and their expert advice made a very significant difference
to the chapter.
Without the C# team, this book would have no cause to exist, of course. Their dedication to the language in design, implementation and testing is exemplary, and I look
forward to seeing what they come up with next. Since the second edition was published, Eric Lippert has left the C# team for a new fabulous adventure, but I’m enormously grateful that he was still able to act as the tech reviewer for this third edition. I
also thank him for the foreword that he originally wrote to the first edition and that is
included again this time. I refer to Eric’s thoughts on various matters throughout the
book, and if you aren’t already reading his blog (), you really
should be..
Who should read this book?.
If you don’t know any C# at all, this probably isn’t the book for you. You could
struggle through, looking up aspects you’re not familiar with, but it wouldn’t be a very
efficient way of learning. You’d be better off starting with a different book, and then
gradually adding C# in Depth to the mix. There’s a wide variety of books that cover C#
xxiv
ABOUT THIS BOOK
xxv
from scratch, in many different styles. The C# in a Nutshell series (O’Reilly) has always
been good in this respect, and Essential C# 5.0 (Addison-Wesley Professional) is also a
good introduction.
I’m not going to claim that reading this book will make you a fabulous coder.
There’s so much more to software engineering than knowing the syntax of the language you happen to be using. I give some words of guidance, but ultimately there’s a
lot more gut instinct in development than most of us would like to admit. What I will
claim is that if you read and understand this book, you should feel comfortable with
C# and free to follow your instincts without too much apprehension. It’s not about
being able to write code that no one else will understand because it uses unknown corners of the language; it’s about being confident that you know the options available to
you, and know which path the C# idioms are encouraging you to follow.
Roadmap
The book’s structure is simple. There are five parts and three appendixes. The first
part serves as an introduction, including a refresher on topics in C# 1 that are important for understanding later versions of the language, and that are often misunderstood. The second part covers the new features introduced in C# 2, the third part
covers C# 3, and so on.
There are occasions when organizing the material this way means we'll come back
to a topic a couple of times—in particular, delegates are improved in C# 2 and then
again in C# 3—but there is method in my madness. I anticipate that a number of readers will be using different versions for different projects; for example, you may be
using C# 4 at work, but experimenting with C# 5 at home. That means it’s useful to
clarify what is in which version. It also provides a feeling of context and evolution—it
shows how the language has developed over time.
Chapter 1 sets the scene by taking a simple piece of C# 1 code and evolving it, seeing how later versions allow the source to become more readable and powerful. We'll
look at the historical context in which C# has grown, and the technical context in
which it operates as part of a complete platform; C# as a language builds on framework libraries and a powerful runtime to turn abstraction into reality.
Chapter 2 looks back at C# 1, and at three specific aspects: delegates, the type system characteristics, and the differences between value types and reference types.
These topics are often understood “just well enough” by C# 1 developers, but as C#
has evolved and developed them significantly, a solid grounding is required in order
to make the most of the new features.
Chapter 3 tackles the biggest feature of C# 2, and potentially the hardest to grasp:
generics. Methods and types can be written generically, with type parameters standing
in for real types that are specified in the calling code. Initially it’s as confusing as this
description makes it sound, but once you understand generics, you’ll wonder how you
survived without them.
xxvi
ABOUT THIS BOOK
If you’ve ever wanted to represent a null integer, chapter 4 is for you. It introduces
nullable types: a feature, built on generics, that takes advantage of support in the language, runtime, and framework.
Chapter 5 shows the improvements to delegates in C# 2. Until now, you may have
only used delegates for handling events such as button clicks. C# 2 makes it easier to
create delegates, and library support makes them more useful for situations other
than events.
In chapter 6 we'll examine iterators, and the easy way to implement them in C# 2.
Few developers use iterator blocks, but as LINQ to Objects is built on iterators, they’ll
become more and more important. The lazy nature of their execution is also a key
part of LINQ.
Chapter 7 shows a number of smaller features introduced in C# 2, each making life
a little more pleasant. The language designers have smoothed over a few rough places
in C# 1, allowing more flexible interaction with code generators, better support for
utility classes, more granular access to properties, and more.
Chapter 8 once again looks at a few relatively simple features—but this time in C#
3. Almost all the new syntax is geared toward the common goal of LINQ, but the building blocks are also useful in their own right. With anonymous types, automatically
implemented properties, implicitly typed local variables, and greatly enhanced initialization support, C# 3 gives a far richer language with which your code can express its
behavior.
Chapter 9 looks at the first major topic of C# 3—lambda expressions. Not content
with the reasonably concise syntax discussed in chapter 5, the language designers have
made delegates even easier to create than in C# 2. Lambdas are capable of more—
they can be converted into expression trees, a powerful way of representing code as
data.
In chapter 10 we’ll examine extension methods, which provide a way of fooling the
compiler into believing that methods declared in one type actually belong to another.
At first glance this appears to be a readability nightmare, but with careful consideration it can be an extremely powerful feature—and one that’s vital to LINQ.
Chapter 11 combines the previous three chapters in the form of query expressions, a concise but powerful way of querying data. Initially we’ll concentrate on LINQ
to Objects, but you’ll see how the query expression pattern is applied in a way that
allows other data providers to plug in seamlessly.
Chapter 12 is a quick tour of various different uses of LINQ. First we’ll look at the
benefits of query expressions combined with expression trees—how LINQ to SQL is
able to convert what appears to be normal C# into SQL statements. We’ll then move
on to see how libraries can be designed to mesh well with LINQ, taking LINQ to XML
as an example. Parallel LINQ and Reactive Extensions show two alternative
approaches to in-process querying, and the chapter closes with a discussion of how
you can extend LINQ to Objects with your own LINQ operators.
ABOUT THIS BOOK
xxvii
Coverage of C# 4 begins in chapter 13, where we’ll look at named arguments and
optional parameters, COM interop improvements, and generic variance. In some ways
these are very separate features, but named arguments and optional parameters contribute to COM interop as well as the more specific abilities that are only available
when working with COM objects.
Chapter 14 describes the single biggest feature in C# 4: dynamic typing. The ability
to bind members dynamically at execution time instead of statically at compile time is
a huge departure for C#, but it’s applied selectively—only code that involves a
dynamic value will be executed dynamically.
Chapter 15 is all about asynchrony. C# 5 only contains one major feature—the ability to write asynchronous functions. This single feature is simultaneously brainbustingly complicated to understand thoroughly and awe-inspiringly elegant to use. At
long last, we can write asynchronous code that doesn’t read like spaghetti.
We’ll wind down in chapter 16 with the remaining features of C# 5 (both of which
are tiny) and some thoughts about the future.
The appendixes are all reference material. In appendix A, I cover the LINQ standard query operators, with some examples. Appendix B looks at the core generic collection classes and interfaces. Appendix C provides a brief look at the different
versions of .NET, including the different flavors such as the Compact Framework and
Silverlight.
Terminology, typography, and downloads
Most of the terminology of the book is explained as it goes along, but there are a few
definitions that are worth highlighting here. I use C# 1, C# 2, C# 3, C# 4, and C# 5 in
a reasonably obvious manner—but you may see other books and websites referring to
C# 1.0, C# 2.0, C# 3.0, C# 4.0, and C# 5.0. The extra “.0” seems redundant to me,
which is why I’ve omitted it—I hope the meaning is clear.
I’ve appropriated a pair of terms from a C# book by Mark Michaelis. To avoid the
confusion between runtime being an execution environment (as in “the Common Language Runtime”) and a point in time (as in “overriding occurs at runtime”), Mark
uses execution time for the latter concept, usually in comparison with compile time. This
seems to me to be a thoroughly sensible idea, and one that I hope catches on in the
wider community. I’m doing my bit by following his example in this book.
I frequently refer to “the language specification” or just “the specification”—unless
I indicate otherwise, this means the C# language specification. However, multiple versions of the specification are available, partly due to different versions of the language
itself and partly due to the standardization process. Any section numbers provided are
from the C# 5.0 language specification from Microsoft.
This book contains numerous pieces of code, which appear in a fixed-width
font like this; output from the listings appears in the same way. Code annotations
accompany some listings, and at other times particular sections of the code are shown
in bold to highlight a change, improvement, or addition. Almost all of the code
xxviii
ABOUT THIS BOOK
appears in snippet form, allowing it to stay compact but still runnable—within the
right environment. That environment is Snippy, a custom tool that is introduced in
section 1.8. Snippy is available for download, along with all of the code from the book
(in the form of snippets, full Visual Studio solutions, or more often both) from the
book’s website at csharpindepth.com, as well as from the publisher's website at manning.com/CSharpinDepthThirdEdition.
Author Online and the C# in Depth website
Purchase of C# in Depth, Third Edition includes free access to a private web forum run
by Manning Publications where you can make comments about the book, ask technical questions, and receive help from the author and other users. To access the forum
and subscribe to it, point your web browser to. This page provides information on how to get on the forum once you
are registered, what kind of help is available, and the rules of conduct on the forum.
The Author Online forum and the archives of previous discussions will be accessible from the publisher’s website as long as the book is in print.
In addition to Manning’s own website, I have set up a companion website for the
book at csharpindepth.com, containing information that didn’t quite fit into the
book, downloadable source code for all the listings in the book, and links to other
resources.
about the author
I’m not a typical C# developer, I think it’s fair to say. For the last five years, almost all
of my time working with C# has been for fun—effectively as a somewhat obsessive
hobby. At work, I’ve been writing server-side Java in Google London, and I can safely
claim that few things help you to appreciate new language features more than having
to code in a language that doesn’t have them, but is similar enough to remind you of
their absence.
I’ve tried to keep in touch with what other developers find hard about C# by keeping a careful eye on Stack Overflow, posting oddities to my blog, and occasionally talking about C# and related topics just about anywhere that will provide people to listen
to me. Additionally, I’m actively developing an open source .NET date and time API
called Noda Time (see). In short, C# is still coursing through
my veins as strongly as ever.
For all these oddities—and despite my ever-surprising micro-celebrity status due to
Stack Overflow—I’m a very ordinary developer in many other ways. I write plenty of
code that makes me grimace when I come back to it. My unit tests don’t always come
first...and sometimes they don’t even exist. I make off-by-one errors every so often.
The type inference section of the C# specification still confuses me, and there are
some uses of Java wildcards that make me want to have a little lie-down. I’m a deeply
flawed programmer.
That’s the way it should be. For the next few hundred pages, I’ll try to pretend otherwise: I’ll espouse best practices as if I always followed them myself, and frown on
dirty shortcuts as if I’d never dream of taking them. Don’t believe a word of it. The
truth of the matter is, I’m probably just like you. I happen to know a bit more about
how C# works, that’s all...and even that state of affairs will only last until you’ve finished the book.
xxix
about the cover illustration
The caption for the illustration on the cover of C# in Depth, Third Edition is “Musician.” simply
Part 1
Preparing for the journey
E
very reader will come to this book with a different set of expectations and
a different level of experience. Are you an expert looking to fill some holes, however small, in your present knowledge? Perhaps you consider yourself an average
developer, with a bit of experience in using generics and lambda expressions,
but a desire to better understand how they work. Maybe you’re reasonably confident with C# 2 and 3 but have no experience with C# 4 or 5.
As an author, I can’t make every reader the same—and I wouldn’t want to,
even if I could. But I hope that all readers have two things in common: the
desire for a deeper relationship with C# as a language, and at least a basic knowledge of C# 1. If you can bring those elements to the party, I’ll provide the rest.
The potentially huge range of skill levels is the main reason why this part of
the book exists. You may already know what to expect from later versions of C#—
or it could all be brand new to you. You could have a rock-solid understanding of
C# 1, or you might be rusty on some of the details—some of which will become
increasingly important as you learn about the later versions. By the end of part 1,
I won’t have leveled the playing field entirely, but you should be able to approach
the rest of the book with confidence and an idea of what’s coming later.
In the first two chapters, we’ll look both forward and back. One of the key
themes of the book is evolution. Before introducing any feature into the language, the C# design team carefully considers that feature in the context of
what’s already present and the general goals for the future. This brings a feeling
of consistency to the language even in the midst of change. To understand how
and why the language is evolving, you need to see where it’s come from and
where it’s going.
Chapter 1 presents a bird’s-eye view of the rest of the book, taking a brief look at
some of the biggest features of C# beyond version 1. I’ll show a progression of code
from C# 1 onward, applying new features one by one until the code is almost unrecognizable from its humble beginnings. We’ll also look at some of the terminology I’ll use
in the rest of the book, as well as the format for the sample code.
Chapter 2 is heavily focused on C# 1. If you’re an expert in C# 1, you can skip this
chapter, but it does tackle some of the areas of C# 1 that tend to be misunderstood.
Rather than try to explain the whole of the language, the chapter concentrates on features that are fundamental to the later versions of C#. From this solid base, you can
move on and look at C# 2 in part 2 of the book.
The changing face
of C# development
This chapter covers
An evolving example
The composition of .NET
Using the code in this book
The C# language specification
Do you know what I really like about dynamic languages such as Python, Ruby, and
Groovy? They suck away fluff from your code, leaving just the essence of it—the bits
that really do something. Tedious formality gives way to features such as generators,
lambda expressions, and list comprehensions.
The interesting thing is that few of the features that tend to give dynamic languages their lightweight feel have anything to do with being dynamic. Some do, of
course—duck typing and some of the magic used in Active Record, for example—
but statically typed languages don’t have to be clumsy and heavyweight.
Enter C#. In some ways, C# 1 could have been seen as a nicer version of the Java
language, circa 2001. The similarities were all too clear, but C# had a few extras:
properties as a first-class feature in the language, delegates and events, foreach
3
4
CHAPTER 1
The changing face of C# development
loops, using statements, explicit method overriding, operator overloading, and custom value types, to name a few. Obviously, language preference is a personal issue, but
C# 1 definitely felt like a step up from Java when I first started using it.
Since then, things have only gotten better. Each new version of C# has added significant features to reduce developer angst, but always in a carefully considered way,
and with little backward incompatibility. Even before C# 4 gained the ability to use
dynamic typing where it’s genuinely useful, many features traditionally associated with
dynamic and functional languages had made it into C#, leading to code that’s easier
to write and maintain. Similarly, while the features around asynchrony in C# 5 aren’t
exactly the same as those in F#, it feels to me like there’s a definite influence.
In this book, I’ll take you through those changes one by one, in enough detail to
make you feel comfortable with some of the miracles the C# compiler is now prepared
to perform on your behalf. All that comes later, though—in this chapter I’ll whiz
through as many features as I can, barely taking a breath. I’ll define what I mean when
I talk about C# as a language compared with .NET as a platform, and I’ll offer a few
important notes about the sample code for the rest of the book. Then we can dive into
the details.
We won’t be looking at all the changes made to C# in this single chapter, but you’ll
see generics, properties with different access modifiers, nullable types, anonymous
methods, automatically implemented properties, enhanced collection initializers,
enhanced object initializers, lambda expressions, extension methods, implicit typing,
LINQ query expressions, named arguments, optional parameters, simpler COM
interop, dynamic typing, and asynchronous functions. These will carry us from C# 1
all the way up to the latest release, C# 5. Obviously that’s a lot to get through, so let’s
1.1
Starting with a simple data type
In this chapter I’ll let the C# compiler do amazing things without telling you how and
barely mentioning the what or the why. This is the only time that I won’t explain how
things work or try to go one step at a time. Quite the opposite, in fact—the plan is to
impress rather than educate. If you read this entire section without getting at least a
little excited about what C# can do, maybe this book isn’t for you. With any luck,
though, you’ll be eager to get to the details of how these magic tricks work, and that’s
what the rest of the book is for.
The example I’ll use is contrived—it’s designed to pack as many new features into
as short a piece of code as possible. It’s also clichéd, but at least that makes it familiar.
Yes, it’s a product/name/price example, the e-commerce alternative to “hello, world.”
We’ll look at how various tasks can be achieved, and how, as we move forward in versions of C#, you can accomplish them more simply and elegantly than before. You
won’t see any of the benefits of C# 5 until right at the end, but don’t worry—that
doesn’t make it any less important.
Starting with a simple data type
1.1.1
5
The Product type in C# 1
We’ll start off with a type representing a product, and then manipulate it. You won’t
see anything particularly impressive yet—just the encapsulation of a couple of properties. To make life simpler for demonstration purposes, this is also where we’ll create a
list of predefined products.
Listing 1.1 shows the type as it might be written in C# 1. We’ll then move on to see
how the code might be rewritten for each later version. This is the pattern we’ll follow
for each of the other pieces of code. Given that I’m writing this in 2013, it’s likely that
you’re already familiar with code that uses some of the features I’ll introduce, but it’s
worth looking back so you can see how far the language has come.
Listing 1.1
The Product type (C# 1)
using System.Collections;
public class Product
{
string name;
public string Name { get { return name; } }
decimal price;
public decimal Price { get { return price; } }
public Product(string name, decimal price)
{
this.name = name;
this.price = price;
}
public static ArrayList GetSampleProducts()
{
ArrayList list = new ArrayList(););
}
}
Nothing in listing 1.1 should be hard to understand—it’s just C# 1 code, after all.
There are three limitations that it demonstrates, though:
An ArrayList has no compile-time information about what’s in it. You could
accidentally add a string to the list created in GetSampleProducts, and the compiler wouldn’t bat an eyelid.
You’ve provided public getter properties, which means that if you wanted
matching setters, they’d have to be public, too.
6
CHAPTER 1
The changing face of C# development
There’s a lot of fluff involved in creating the properties and variables—code
that complicates the simple task of encapsulating a string and a decimal.
Let’s see what C# 2 can do to improve matters.
1.1.2
Strongly typed collections in C# 2
Our first set of changes (shown in the following listing) tackles the first two items
listed previously, including the most important change in C# 2: generics. The parts
that are new are in bold.
Listing 1.2
Strongly typed collections and private setters (C# 2)
public class Product
{
string name;
public string Name
{
get { return name; }
private set { name = value; }
}
decimal price;
public decimal Price
{
get { return price; }
private set { price = value; }
}
public Product(string name, decimal price)
{
Name = name;
Price = price;
}
public static List<Product> GetSampleProducts()
{
List<Product> list = new List<Product>(););
}
}
You now have properties with private setters (which you use in the constructor), and it
doesn’t take a genius to guess that List<Product> is telling the compiler that the list
contains products. Attempting to add a different type to the list would result in a compiler error, and you also don’t need to cast the results when you fetch them from the list.
Starting with a simple data type
7
The changes in C# 2 leave only one of the original three difficulties unanswered,
and C# 3 helps out there.
1.1.3
Automatically implemented properties in C# 3
We’re starting off with some fairly tame features from C# 3. The automatically implemented properties and simplified initialization shown in the following listing are relatively trivial compared with lambda expressions and the like, but they can make code a
lot simpler.
Listing 1.3
Automatically implemented properties and simpler initialization (C# 3)
using System.Collections.Generic;
class Product
{
public string Name { get; private set; }
public decimal Price { get; private set; }
public Product(string name, decimal price)
{
Name = name;
Price = price;
}
Product() {});
}
}
Now the properties don’t have any code (or visible variables!) associated with them,
and you’re building the hardcoded list in a very different way. With no name and price
variables to access, you’re forced to use the properties everywhere in the class, improving consistency. You now have a private parameterless constructor for the sake of the
new property-based initialization. (This constructor is called for each item before the
properties are set.)
In this example, you could’ve removed the public constructor completely, but then
no outside code could’ve created other product instances.
8
CHAPTER 1
1.1.4
The changing face of C# development
Named arguments in C# 4
For C# 4, we’ll go back to the original code when it comes to the properties and constructor, so that it’s fully immutable again. A type with only private setters can’t be publicly mutated, but it can be clearer if it’s not privately mutable either.1 There’s no
shortcut for read-only properties, unfortunately, but C# 4 lets you specify argument
names for the constructor call, as shown in the following listing, which gives you the
clarity of C# 3 initializers without the mutability.
Listing 1.4
Named arguments for clear initialization code (C# 4)
using System.Collections.Generic;
public class Product
{
readonly string name;
public string Name { get { return name; } }
readonly decimal price;
public decimal Price { get { return price; } }
public Product(string name, decimal price)
{
this.name = name;
this.price = price;
});
}
}
The benefits of specifying the argument names explicitly are relatively minimal in this
particular example, but when a method or constructor has several parameters, it can
make the meaning of the code much clearer—particularly if they’re of the same type,
or if you’re passing in null for some arguments. You can choose when to use this
feature, of course, only specifying the names for arguments when it makes the code
easier to understand.
Figure 1.1 summarizes how the Product type has evolved so far. I’ll include a similar diagram after each task, so you can see the pattern of how the evolution of C#
1
The C# 1 code could’ve been immutable too—I only left it mutable to simplify the changes for C# 2 and 3.
9
Sorting and filtering
C# 1
C# 2
Read-only properties
Weakly typed collections
Private property setters
Strongly typed collections
C# 3
Automatically implemented
properties
Enhanced collection and
object initialization
C# 4
Named arguments for
clearer constructor
and method calls
Figure 1.1 Evolution of
the Product type,
showing greater
encapsulation, stronger
typing, and ease of
initialization over time
improves the code. You’ll notice that C# 5 is missing from all of the block diagrams;
that’s because the main feature of C# 5 (asynchronous functions) is aimed at an area
that really hasn’t evolved much in terms of language support. We’ll take a peek at it
before too long, though.
So far, the changes are relatively minimal. In fact, the addition of generics (the
List<Product> syntax) is probably the most important part of C# 2, but you’ve only
seen part of its usefulness so far. There’s nothing to get the heart racing yet, but we’ve
only just started. Our next task is to print out the list of products in alphabetical order.
1.2
Sorting and filtering
In this section, we won’t change the Product type at all—instead, we’ll take the sample
products and sort them by name, and then find the expensive ones. Neither of these
tasks is exactly difficult, but you’ll see how much simpler they become over time.
1.2.1
Sorting products by name
The easiest way to display a list in a particular order is to sort the list and then run
through it, displaying items. In .NET 1.1, this involved using ArrayList.Sort, and
optionally providing an IComparer implementation to specify a particular comparison. You could make the Product type implement IComparable, but that would only
allow you to define one sort order, and it’s not a stretch to imagine that you might
want to sort by price at some stage, as well as by name.
The following listing implements IComparer, and then sorts the list and displays it.
Listing 1.5
Sorting an ArrayList using IComparer (C# 1)
class ProductNameComparer : IComparer
{
public int Compare(object x, object y)
{
Product first = (Product)x;
Product second = (Product)y;
return first.Name.CompareTo(second.Name);
10
CHAPTER 1
The changing face of C# development
}
}
...
ArrayList products = Product.GetSampleProducts();
products.Sort(new ProductNameComparer());
foreach (Product product in products)
{
Console.WriteLine (product);
}
The first thing to spot in listing 1.5 is that you had to introduce an extra type to help
with the sorting. That’s not a disaster, but it’s a lot of code if you only want to sort by
name in one place. Next, look at the casts in the Compare method. Casts are a way of
telling the compiler that you know more information than it does, and that usually
means there’s a chance you’re wrong. If the ArrayList you returned from GetSampleProducts did contain a string, that’s where the code would go bang—where the
comparison tries to cast the string to a Product.
You also have a cast in the code that displays the sorted list..
Listing 1.6
Sorting a List<Product> using IComparer<Product> (C# 2)
class ProductNameComparer : IComparer<Product>
{
public int Compare(Product x, Product y)
{
return x.Name.CompareTo(y.Name);
}
}
...
List<Product> products = Product.GetSampleProducts();
products.Sort(new ProductNameComparer());
foreach (Product product in products)
{
Console.WriteLine(product);
}
The code for the comparer in listing 1.6 is simpler because you’re given products to
start with. No casting is necessary. Similarly, the invisible cast in the foreach loop is
effectively gone now. The compiler still has to consider the conversion from the
source type of the sequence to the target type of the variable, but it knows that in this
case both types are Product, so it doesn’t need to emit any code for the conversion.
That’s an improvement, but it’d be nice if you could sort the products by simply
specifying the comparison to make, without needing to implement an interface to do
so. The following listing shows how to do precisely this, telling the Sort method how
to compare two products using a delegate.
Sorting and filtering
Listing 1.7
11
Sorting a List<Product> using Comparison<Product> (C# 2)
List<Product> products = Product.GetSampleProducts();
products.Sort(delegate(Product x, Product y)
{ return x.Name.CompareTo(y.Name); }
);
foreach (Product product in products)
{
Console.WriteLine(product);
}
Behold the lack of the ProductNameComparer type. The statement in bold font creates
a delegate instance, which you provide to the Sort method in order to perform the
comparisons. You’ll learn more about this feature (anonymous methods) in chapter 5.
You’ve now fixed all the problems identified in the C# 1 version. That doesn’t
mean that C# 3 can’t do better, though. First, you’ll replace the anonymous method
with an even more compact way of creating a delegate instance, as shown in the following listing.
Listing 1.8
Sorting using Comparison<Product> from a lambda expression (C# 3)
List<Product> products = Product.GetSampleProducts();
products.Sort((x, y) => x.Name.CompareTo(y.Name));
foreach (Product product in products)
{
Console.WriteLine(product);
}
You’ve gained even more strange syntax (a lambda expression), which still creates a
Comparison<Product> delegate, just as listing 1.7 did, but this time with less fuss. You
didn’t have to use the delegate keyword to introduce it, or even specify the types of
the parameters.
There’s more, though: with C# 3, you can easily print out the names in order without modifying the original list of products. The next listing shows this using the
OrderBy method.
Listing 1.9
Ordering a List<Product> using an extension method (C# 3)
List<Product> products = Product.GetSampleProducts();
foreach (Product product in products.OrderBy(p => p.Name) )
{
Console.WriteLine (product);
}
In this listing, you appear to be calling an OrderBy method on the list, but if you look
in MSDN, you’ll see that it doesn’t even exist in List<Product>. You’re able to call it
due to the presence of an extension method, which you’ll see in more detail in chapter
10. You’re not actually sorting the list “in place” anymore, just retrieving the contents
12
CHAPTER 1
The changing face of C# development
C# 1
C# 2
C# 3
Weakly typed comparator
No delegate sorting option
Strongly typed comparator
Delegate comparisons
Anonymous methods
Lambda expressions
Extension methods
Option of leaving list unsorted
Figure 1.2
Features involved in making sorting easier in C# 2 and 3
of the list in a particular order. Sometimes you’ll need to change the actual list; sometimes an ordering without any other side effects is better.
The important point is that this code is much more compact and readable (once
you understand the syntax, of course). We wanted the list ordered by name, and that’s
exactly what the code says. It doesn’t say to sort by comparing the name of one product with the name of another, like the C# 2 code did, or to sort by using an instance of
another type that knows how to compare one product with another. It just says to
order by name. This simplicity of expression is one of the key benefits of C# 3. When
the individual pieces of data querying and manipulation are so simple, larger transformations can remain compact and readable in one piece of code. That, in turn,
encourages a more data-centric way of looking at the world.
You’ve seen more of the power of C# 2 and 3 in this section, with a lot of (as yet)
unexplained syntax, but even without understanding the details you can see the progress toward clearer, simpler code. Figure 1.2 shows that evolution.
That’s it for sorting.2 Let’s do a different form of data manipulation now—querying.
1.2.2
Querying collections
Your next task is to find all the elements of the list that match a certain criterion—in
particular, those with a price greater than $10. The following listing shows how, in C# 1,
you need to loop around, testing each element and printing it out when appropriate.
Listing 1.10
Looping, testing, printing out (C# 1)
ArrayList products = Product.GetSampleProducts();
foreach (Product product in products)
{
if (product.Price > 10m)
{
Console.WriteLine(product);
}
}
This code is not difficult to understand. But it’s worth bearing in mind how intertwined the three tasks are—looping with foreach, testing the criterion with if, and
2
C# 4 does provide one feature that can be relevant when sorting, called generic variance, but giving an example
here would require too much explanation. You can find the details near the end of chapter 13.
Sorting and filtering
13
then displaying the product with Console.WriteLine. The dependency is obvious
because of the nesting.
The following listing demonstrates how C# 2 lets you flatten things out a bit.
Listing 1.11
Separating testing from printing (C# 2)
List<Product> products = Product.GetSampleProducts();
Predicate<Product> test = delegate(Product p) { return p.Price > 10m; };
List<Product> matches = products.FindAll(test);
Action<Product> print = Console.WriteLine;
matches.ForEach(print);
The test variable is initialized using the anonymous method feature you saw in the
previous section. The print variable initialization uses another new C# 2 feature
called method group conversions that makes it easier to create delegates from existing
methods.
I’m not going to claim that this code is simpler than the C# 1 code, but it is a lot
more powerful.3
In particular, the technique of separating the two concerns like this makes it very
easy to change the condition you’re testing for and the action you take on each of the
matches independently. The delegate variables involved (test and print) could be
passed into a method, and that same method could end up testing radically different
conditions and taking radically different actions. Of course, you could put all the testing and printing into one statement, as shown in the following listing.
Listing 1.12
Separating testing from printing redux (C# 2)
List<Product> products = Product.GetSampleProducts();
products.FindAll(delegate(Product p) { return p.Price > 10;})
.ForEach(Console.WriteLine);
In some ways, this version is better, but the delegate(Product p) is getting in the
way, as are the braces. They’re adding noise to the code, which hurts readability. I still
prefer the C# 1 version in cases where I only ever want to use the same test and perform the same action. (It may sound obvious, but it’s worth remembering that there’s
nothing stopping you from using the C# 1 code with a later compiler version. You
wouldn’t use a bulldozer to plant tulip bulbs, which is the kind of overkill used in the
last listing.)
The next listing shows how C# 3 improves matters dramatically by removing a lot
of the fluff surrounding the actual logic of the delegate.
3
In some ways, this is cheating. You could’ve defined appropriate delegates in C# 1 and called them within the
loop. The FindAll and ForEach methods in .NET 2.0 just encourage you to consider separation of concerns.
14
CHAPTER 1
Listing 1.13
The changing face of C# development
Testing with a lambda expression (C# 3)
List<Product> products = Product.GetSampleProducts();
foreach (Product product in products.Where(p => p.Price > 10))
{
Console.WriteLine(product);
}
The combination of the lambda expression putting the test in just the right place and
a well-named method means you can almost read the code out loud and understand it
without thinking. You still have the flexibility of C# 2—the argument to Where could
come from a variable, and you could use an Action<Product> instead of the hardcoded Console.WriteLine call if you wanted to.
This task has emphasized what you already knew from sorting—anonymous methods make writing a delegate simple, and lambda expressions are even more concise.
In both cases, that brevity means that you can include the query or sort operation
inside the first part of the foreach loop without losing clarity.
Figure 1.3 summarizes the changes we’ve just looked at. C# 4 doesn’t offer anything to simplify this task any further.
C# 1
C# 2
C# 3
Strong coupling between
condition and action.
Both are hardcoded.
Separate condition from
action invoked.
Anonymous methods
make delegates simple.
Lambda expressions
make the condition
even easier to read.
Figure 1? How can you cope with that within the Product class?
1.3
Handling an absence of data
We’ll look at two different forms of missing data. First we’ll deal with the scenario
where you genuinely don’t have the information, and then see how you can actively
remove information from method calls, using default values.
1.3.1
Representing an unknown price
I won’t present much code this time, but I’m sure it’ll be a familiar problem to you,
especially if you’ve done a lot of work with databases. Imagine your list of products
contains not just products on sale right now, but ones that aren’t available yet. In some
cases, you may not know the price. If decimal were a reference type, you could just use
null to represent the unknown price, but since it’s a value type, you can’t. How would
you represent this in C# 1?
Handling an absence of data
15
There are three common alternatives:
Create a reference type wrapper around decimal.
Maintain a separate Boolean flag indicating whether the price is known.
Use a “magic value” (decimal.MinValue, for example) to represent the
unknown price.
I hope you’ll agree that none of these holds much appeal. Time for a little magic: you
can solve the problem by adding a single character in the variable and property declarations. .NET 2.0 makes matters a lot simpler by introducing the Nullable<T> structure, and C# 2 provides some additional syntactic sugar that lets you change the
property declaration to this block of code:
decimal? price;
public decimal? Price
{
get { return price; }
private set { price = value; }
}
The constructor parameter changes to decimal?, and then you can pass in null as the
argument, or say Price = null; within the class. The meaning of the null changes
from “a special reference that doesn’t refer to any object” to “a special value of any
nullable type representing the absence of other data,” where all reference types and
all Nullable<T>-based types count as nullable types.
That’s a lot more expressive than any of the other solutions. The rest of the code
works as is—a product with an unknown price will be considered to be less expensive
than $10, due to the way nullable values are handled in greater-than comparisons. To
check whether a price is known, you can compare it with null or use the HasValue
property, so to show all the products with unknown prices in C# 3, you’d write the following code.
Listing 1.14
Displaying products with an unknown price (C# 3)
List<Product> products = Product.GetSampleProducts();
foreach (Product product in products.Where(p => p.Price == null))
{
Console.WriteLine(product.Name);
}
The C# 2 code would be similar to that in listing 1.12, but you’d need to check for
null in the anonymous method:
List<Product> products = Product.GetSampleProducts();
products.FindAll(delegate(Product p) { return p.Price == null; })
.ForEach(Console.WriteLine);
C# 3 doesn’t offer any changes here, but C# 4 has a feature that’s at least tangentially
related.
16
1.3.2
CHAPTER 1
The changing face of C# development
Optional parameters and default values
Sometimes you don’t want to tell a method everything it needs to know, such as when
you almost always use the same value for a particular parameter. Traditionally the
solution has been to overload the method in question, but C# 4 introduced optional
parameters to make this simpler.
In the C# 4 version of the Product type, you have a constructor that takes the name
and the price. You can make the price a nullable decimal, just as in C# 2 and 3, but
let’s suppose that most of the products don’t have prices. It would be nice to be able to
initialize a product like this:
Product p = new Product("Unreleased product");
Prior to C# 4, you would’ve had to introduce a new overload in the Product constructor for this purpose. C# 4 allows you to declare a default value (in this case, null) for
the price parameter:
public Product(string name, decimal? price = null)
{
this.name = name;
this.price = price;
}
You always have to specify a constant value when you declare an optional parameter. It
doesn’t have to be null; that just happens to be the appropriate default in this situation. The requirement that the default value is a constant applies to any type of
parameter, although for reference types other than strings you are limited to null as
the only constant value available.
Figure 1.4 summarizes the evolution we’ve looked at across different versions of C#.
C# 1
C# 2 / 3
Choice between extra work
maintaining a flag, changing
to reference type semantics,
or the hack of a magic value.
Nullable types make the
"extra work" option simple,
and syntactic sugar improves
matters even further.
Figure 1.4
C# 4
Optional parameters
allow simple defaulting.
Options for working with missing data
So far the features have been useful, but perhaps nothing to write home about. Next
we’ll look at something rather more exciting: LINQ.
1.4
Introducing LINQ
LINQ (Language-Integrated Query) is at the heart of the changes in C# 3. As its name
suggests, LINQ is all about queries—the aim is to make it easy to write queries against
multiple data sources with consistent syntax and features, in a readable and composable fashion.
Introducing LINQ
17
Whereas the features in C# 2 are arguably more about fixing annoyances in C# 1
than setting the world on fire, almost everything in C# 3 builds toward LINQ, and the
result is rather special. I’ve seen features in other languages that tackle some of the
same areas as LINQ, but nothing quite so well-rounded and flexible.
1.4.1
Query expressions and in-process queries
If you’ve seen any LINQ code before, you’ve probably seen query expressions that allow
you to use a declarative style to create queries on various data sources. The reason
none of this chapter’s examples have used query expressions so far is that the examples have all been simpler without using the extra syntax. That’s not to say you couldn’t
use it anyway—the following listing, for example, is equivalent to listing 1.13.
Listing 1.15
First steps with query expressions: filtering a collection
List<Product> products = Product.GetSampleProducts();
var filtered = from Product p in products
where p.Price > 10
select p;
foreach (Product product in filtered)
{
Console.WriteLine(product);
}
Personally, I find the earlier listing easier to read—the only benefit to this query
expression version is that the where clause is simpler. I’ve snuck in one extra feature
here—implicitly typed local variables, which are declared using the var contextual keyword. These allow the compiler to infer the type of a variable from the value that it’s
initially assigned—in this case, the type of filtered is IEnumerable<Product>. I’ll use
var fairly extensively in the rest of the examples in this chapter; it’s particularly useful
in books, where space in listings is at a premium.
But if query expressions are no good, why does everyone make such a fuss about
them, and about LINQ in general? The first answer is that although query expressions
aren’t particularly beneficial for simple tasks, they’re very good for more complicated
situations that would be hard to read if written out in the equivalent method calls
(and would be fiendish in C# 1 or 2). Let’s make things a little harder by introducing
another type—Supplier.
Each supplier has a Name (string) and a SupplierID (int). I’ve also added
SupplierID as a property in Product and adapted the sample data appropriately.
Admittedly that’s not a very object-oriented way of giving each product a supplier—it’s
much closer to how the data would be represented in a database. It does make this
particular feature easier to demonstrate for now, but you’ll see in chapter 12 that
LINQ allows you to use a more natural model too.
Now let’s look at the code (listing 1.16) that joins the sample products with the
sample suppliers (obviously based on the supplier ID), applies the same price filter as
before to the products, sorts by supplier name and then product name, and prints out
18
CHAPTER 1
The changing face of C# development
the name of both the supplier and the product for each match. That was a mouthful,
and in earlier versions of C# it would’ve been a nightmare to implement. In LINQ, it’s
almost trivial.
Listing 1.16
Joining, filtering, ordering, and projecting (C# 3)
List<Product> products = Product.GetSampleProducts();
List<Supplier> suppliers = Supplier.GetSampleSuppliers();
var filtered = from p in products
join s in suppliers
on p.SupplierID equals s.SupplierID
where p.Price > 10
orderby s.Name, p.Name
select new { SupplierName = s.Name, ProductName = p.Name };
foreach (var v in filtered)
{
Console.WriteLine("Supplier={0}; Product={1}",
v.SupplierName, v.ProductName);
}
You might have noticed that this looks remarkably like SQL. Indeed, the reaction of
many people on first hearing about LINQ (but before examining it closely) is to reject
it as merely trying to put SQL into the language for the sake of talking to databases.
Fortunately, LINQ has borrowed the syntax and some ideas from SQL, but as you’ve
seen, you needn’t be anywhere near a database in order to use it. None of the code
you’ve seen so far has touched a database at all. Indeed, you could be getting data
from any number of sources: XML, for example.
1.4.2
Querying XML
Suppose that instead of hardcoding your suppliers and products, you’d used the following XML file:
<?xml version="1.0"?>
<Data>
<Products>
<Product Name="West Side Story" Price="9.99" SupplierID="1" />
<Product Name="Assassins" Price="14.99" SupplierID="2" />
<Product Name="Frogs" Price="13.99" SupplierID="1" />
<Product Name="Sweeney Todd" Price="10.99" SupplierID="3" />
</Products>
<Suppliers>
<Supplier Name="Solely Sondheim" SupplierID="1" />
<Supplier Name="CD-by-CD-by-Sondheim" SupplierID="2" />
<Supplier Name="Barbershop CDs" SupplierID="3" />
</Suppliers>
</Data>
The file is simple enough, but what’s the best way of extracting the data from it? How
do you query it? Join on it? Surely it’s going to be somewhat harder than what you did
in listing 1.16, right? The following listing shows how much work you have to do in
LINQ to XML.
Introducing LINQ
Listing 1.17
19
Complex processing of an XML file with LINQ to XML (C# 3)
XDocument doc = XDocument.Load("data.xml");
var filtered = from p in doc.Descendants("Product")
join s in doc.Descendants("Supplier")
on (int)p.Attribute("SupplierID")
equals (int)s.Attribute("SupplierID")
where (decimal)p.Attribute("Price") > 10
orderby (string)s.Attribute("Name"),
(string)p.Attribute("Name")
select new
{
SupplierName = (string)s.Attribute("Name"),
ProductName = (string)p.Attribute("Name")
};
foreach (var v in filtered)
{
Console.WriteLine("Supplier={0}; Product={1}",
v.SupplierName, v.ProductName);
}
This approach isn’t quite as straightforward, because you need to tell the system how
it should understand the data (in terms of what attributes should be used as what
types), but it’s not far off. In particular, there’s an obvious relationship between each
part of the two listings. If it weren’t for the line-length limitations of books, you’d see
an exact line-by-line correspondence between the two queries.
Impressed yet? Not quite convinced? Let’s put the data where it’s much more likely
to be—in a database.
1.4.3
LINQ to SQL
There’s some work involved in letting LINQ to SQL know what to expect in what table,
but it’s all fairly straightforward and much of it can be automated. We’ll skip straight
to the querying code, which is shown in the following listing. If you want to see the
details of LinqDemoDataContext, they’re all in the downloadable source code.
Listing 1.18
Applying a query expression to a SQL database (C# 3)
using (LinqDemoDataContext db = new LinqDemoDataContext())
{
var filtered = from p in db.Products
join s in db.Suppliers
on p.SupplierID equals s.SupplierID
where p.Price > 10
orderby s.Name, p.Name
select new { SupplierName = s.Name, ProductName = p.Name };
foreach (var v in filtered)
{
Console.WriteLine("Supplier={0}; Product={1}",
v.SupplierName, v.ProductName);
}
}
20
CHAPTER 1
The changing face of C# development
By now, this should be looking incredibly familiar. Everything below the join line is
cut and pasted directly from listing 1.16 with no changes.
That’s impressive enough, but if you’re performance-conscious, you may be wondering why you’d want to pull down all the data from the database and then apply
these .NET queries and orderings. Why not get the database to do it? That’s what it’s
good at, isn’t it? Well, indeed—and that’s exactly what LINQ to SQL does. The code in
listing 1.18 issues a database request, which is basically the query translated into SQL.
Even though you’ve expressed the query in C# code, it’s been executed as SQL.
You’ll see later that there’s a more relation-oriented way of approaching this kind
of join when the schema and the entities know about the relationship between suppliers and products. The result is the same, though, and it shows just how similar LINQ to
Objects (the in-memory LINQ operating on collections) and LINQ to SQL can be.
expect in terms of querying collections.
1.5
COM and dynamic typing
Next, I’d like to demonstrate some features that are specific to C# 4. Whereas LINQ
was the major focus of C# 3, interoperability was the biggest theme in C# 4. This
includes working with both the old technology of COM and also the brave new world
of dynamic languages executing on the Dynamic Language Runtime (DLR). We’ll start
by exporting the product list to an Excel spreadsheet.
1.5.1
Simplifying COM interoperability
There are various ways of making data available to Excel, but using COM to control it
gives you the most power and flexibility. Unfortunately, previous incarnations of C#
made it quite difficult to work with COM; VB had much better support. C# 4 largely
rectifies that situation.
The following listing shows some code to save your data to a new spreadsheet.
Listing 1.19
Saving data to Excel using COM (C# 4)
var app = new Application { Visible = false };
Workbook workbook = app.Workbooks.Add();
Worksheet worksheet = app.ActiveSheet;
int row = 1;
foreach (var product in Product.GetSampleProducts()
.Where(p => p.Price != null))
{
worksheet.Cells[row, 1].Value = product.Name;
worksheet.Cells[row, 2].Value = product.Price;
row++;
}
workbook.SaveAs(Filename: "demo.xls",
FileFormat: XlFileFormat.xlWorkbookNormal);
app.Application.Quit();
COM and dynamic typing
21
This may not be quite as nice as you’d like, but it’s a lot better than it would’ve been
using earlier versions of C#. In fact, you already know about some of the C# 4 features shown here—but there are a couple of others that aren’t so obvious. Here’s the
full list:
The SaveAs call uses named arguments.
Various calls omit arguments for optional parameters—in particular, SaveAs
would normally have an extra 10 arguments!
C# 4 can embed the relevant parts of the primary interop assembly (PIA) into the
calling code, so you no longer need to deploy the PIA separately.
In C# 3, the assignment to worksheet would fail without a cast, because the type
of the ActiveSheet property is represented as object. When using the embedded PIA feature, the type of ActiveSheet becomes dynamic, which leads to an
entirely different feature.
Additionally, C# 4 supports named indexers when working with COM—a feature not
demonstrated in this example.
I’ve already mentioned the final feature: dynamic typing in C# using the dynamic
type.
1.5.2
Interoperating with a dynamic language
Dynamic typing is such a big topic that the entirety of chapter 14 is dedicated to it. I’ll
just show you one small example of what it can do here.
Suppose your products aren’t stored in a database, or in XML, or in memory.
They’re accessible via a web service of sorts, but you only have Python code to access
it, and that code uses the dynamic nature of Python to build results without declaring
a type containing all the properties you need to access on each result. Instead, the
results let you ask for any property, and try to work out what you mean at execution
time. In a language like Python, there’s nothing unusual about that. But how can you
access your results from C#?
The answer comes in the form of dynamic—a new type4 that the C# compiler
allows you to use dynamically. If an expression is of type dynamic, you can call methods on it, access properties, pass it around as a method argument, and so on—and
most of the normal binding process happens at execution time instead of compile
time. You can implicitly convert a value from dynamic to any other type (which is why
the worksheet cast in listing 1.19 worked) and do all kinds of other fun stuff.
This behavior can also be useful even within pure C# code, with no interop
involved, but it’s fun to see it working with other languages. The following listing
shows how you can get the list of products from IronPython and print it out. This
includes all the setup code to run the Python code in the same process.
4
Sort of, anyway. It’s a type as far as the C# compiler is concerned, but the CLR doesn’t know anything about it.
22
CHAPTER 1
Listing 1.20
The changing face of C# development
Running IronPython and extracting properties dynamically (C# 4)
ScriptEngine engine = Python.CreateEngine();
ScriptScope scope = engine.ExecuteFile("FindProducts.py");
dynamic products = scope.GetVariable("products");
foreach (dynamic product in products)
{
Console.WriteLine("{0}: {1}", product.ProductName, product.Price);
}
Both products and product are declared to be dynamic, so the compiler is happy to
let you iterate over the list of products and print out the properties, even though it
doesn’t know whether it’ll work. If you make a typo, using product.Name instead of
product.ProductName, for example, that would only show up at execution time.
This is completely contrary to the rest of C#, which is statically typed. But dynamic
typing only comes into play when expressions with a type of dynamic are involved;
most C# code is likely to remain statically typed throughout.
1.6
Writing asynchronous code without the heartache
Finally you get to see C# 5’s big feature: asynchronous functions, which allow you to
pause code execution without blocking a thread.
This topic is big—really big—but I’ll give you just a snippet for now. As I’m sure
you’re aware, there are two golden rules when it comes to threading in Windows
Forms: you mustn’t block the UI thread, and you mustn’t access UI elements on any
other thread, except in a few well-specified ways. The following listing shows a single
method that handles a button click in a Windows Forms application and displays
information about a product, given its ID.
Listing 1.21
Displaying products in Windows Forms using an asynchronous function
private async void CheckProduct(object sender, EventArgs e)
{
try
{
productCheckButton.Enabled = false;
string id = idInput.Text;
Task<Product> productLookup = directory.LookupProductAsync(id);
Task<int> stockLookup = warehouse.LookupStockLevelAsync(id);
Product product = await productLookup;
if (product == null)
{
return;
}
nameValue.Text = product.Name;
priceValue.Text = product.Price.ToString("c");
int stock = await stockLookup;
stockValue.Text = stock.ToString();
}
Dissecting the .NET platform
23
finally
{
productCheckButton.Enabled = true;
}
}
The full method is a little longer than the one shown in listing 1.22, displaying status
messages and clearing the results at the start, but this listing contains all the important
parts. The new parts of syntax are in bold—the method has the new async modifier,
and there are two await expressions.
If you squint and ignore those for the moment, you can probably understand the
general flow of the code. It starts off performing lookups on both the product directory and warehouse to find out the product details and current stock. The method
then waits until it has the product information, and quits if the directory has no entry
for the given ID. Otherwise, it fills in the UI elements for the name and price, and
then waits to get the stock information, and displays that too.
Both the product and stock lookups are asynchronous—they could be database
operations or web service calls. It doesn’t matter—when you await the results, you’re
not actually blocking the UI thread, even though all the code in the method runs on
that thread. When the results come back, the method continues from where it left off.
The example also demonstrates that normal flow control (try/finally) operates
exactly as you’d expect it to. The really surprising thing about this method is that it
has managed to achieve exactly the kind of asynchrony you want without any of the
normal messing around starting other threads or BackgroundWorkers, calling Control
.BeginInvoke, or attaching callbacks to asynchronous events. Of course you still need
to think—asynchrony doesn’t become easy using async/await, but it becomes less
tedious, with far less boilerplate code to distract you from the inherent complexity
you’re trying to control.
Are you dizzy yet? Relax, I’ll slow down considerably for the rest of the book. In
particular, I’ll explain some of the corner cases, going into more detail about why various features were introduced, and giving some guidance as to when it’s appropriate to
use them.
So far I’ve been showing you features of C#. Some of these features require library
assistance, and some of them require runtime assistance. I’ll say this sort of thing a lot,
so let’s clear up what I mean.
1.7
Dissecting the .NET platform
When it was originally introduced, .NET was used as a catchall term for a vast range of
technologies coming from Microsoft. For instance, Windows Live ID was called .NET
Passport, despite there being no clear relationship between that and what you currently know as .NET. Fortunately, things have calmed down somewhat since then. In
this section, we’ll look at the various parts of .NET.
24
CHAPTER 1
The changing face of C# development
In several places in this book, I’ll refer to three different kinds of features: features
of C# as a language, features of the runtime that provides the “engine,” if you will, and
features of the .NET framework libraries. This book is heavily focused on the language of
C#, and I’ll generally only discuss runtime and framework features when they relate to
features of C# itself. Often features will overlap, but it’s important to understand
where the boundaries lie.
1.7.1
C#, the language
The language of C# is defined by its specification, which describes the format of C#
source code, including both syntax and behavior. It doesn’t describe the platform that
the compiler output will run on, beyond a few key points where the two interact. For
instance, the C# language requires a type called System.IDisposable, which contains
a method called Dispose. These are required in order to define the using statement.
Likewise, the platform needs to be able to support (in one form or another) both
value types and reference types, along with garbage collection.
In theory, any platform that supports the required features could have a C# compiler
targeting it. For example, a C# compiler could legitimately produce output in a form
other than the Intermediate Language (IL), which is the typical output at the time of this
writing. A runtime could interpret the output of a C# compiler, or convert it all to native
code in one step rather than JIT-compiling it. Though these options are relatively
uncommon, they do exist in the wild; for example, the Micro Framework uses an interpreter, as can Mono (). At the other end of the spectrum,
ahead-of-time compilation is used by NGen and by Xamarin.iOS (
.com/ios)—a platform for building applications for the iPhone and other iOS devices.
1.7.2
Runtime
The runtime aspect of the .NET platform is the relatively small amount of code that’s
responsible for making sure that programs written in IL execute according to the Common Language Infrastructure (CLI) specification (ECMA-335 and ISO/IEC 23271), partitions I to III. The runtime part of the CLI is called the Common Language Runtime (CLR).
When I refer to the CLR in the rest of the book, I mean Microsoft’s implementation.
Some elements of the C# language never appear at the runtime level, but others
cross the divide. For instance, enumerators aren’t defined at a runtime level, and neither is any particular meaning attached to the IDisposable interface, but arrays and
delegates are important to the runtime.
1.7.3
Framework libraries
Libraries provide code that’s available to your programs. The framework libraries in
.NET are largely built as IL themselves, with native code used only where necessary.
This is a mark of the strength of the runtime: your own code isn’t expected to be a
second-class citizen—it can provide the same kind of power and performance as the
Making your code super awesome
25
libraries it utilizes. The amount of code in the libraries is much greater than that of
the runtime, in the same way that there’s much more to a car than the engine.
The framework libraries are partially standardized. Partition IV of the CLI specification provides a number of different profiles (compact and kernel) and libraries. Partition IV comes in two parts—a general textual description of the libraries identifying,
among other things, which libraries are required within which profiles, and another
part containing the details of the libraries themselves in XML format. This is the same
form of documentation produced when you use XML comments within C#.
There’s much within .NET that’s not within the base libraries. If you write a program that only uses libraries from the specification, and uses them correctly, you
should find that your code works flawlessly on any implementation—Mono, .NET, or
anything else. But in practice, almost any program of any size will use libraries that
aren’t standardized—Windows Forms or ASP.NET, for instance. The Mono project has
its own libraries that aren’t part of .NET, such as GTK#, and it implements many of the
nonstandardized libraries.
The term .NET refers to the combination of the runtime and libraries provided by
Microsoft, and it also includes compilers for C# and VB.NET. It can be seen as a whole
development platform built on top of Windows. Each aspect of .NET is versioned separately, which can be a source of confusion. Appendix C gives a quick rundown of
which version of what came out when and with what features.
If that’s all clear, I have one last bit of housekeeping to go through before we really
start diving into C#.
1.8
Making your code super awesome
I apologize for the misleading heading. This section (in itself) will not make your code
super awesome. It won’t even make it refreshingly minty. It will help you make the
most of this book, though—and that’s why I wanted to make sure you actually read it.
There’s more of this sort of thing in the front matter (the bit before page 1), but I
know that many readers skip over that, heading straight for the meat of the book. I
can understand that, so I’ll make this as quick as possible.
1.8.1
Presenting full programs as snippets
One of the challenges when writing a book about a computer language (other than
scripting languages) is that complete programs—ones that the reader can compile
and run with no source code other than what’s presented—get long pretty quickly. I
wanted to get around this, to provide you with code that you could easily type in and
experiment with. I believe that actually trying something is a much better way of learning than just reading about it.
With the right assembly references and the right using directives, you can accomplish a lot with a fairly short amount of C# code, but the killer is the fluff involved in
writing those using directives, declaring a class, and declaring a Main method before
you’ve written the first line of useful code. My examples are mostly in the form of
26
CHAPTER 1
The changing face of C# development
snippets, which ignore the fluff that gets in the way of simple programs, concentrating on the important parts. The snippets can be run directly in a small tool I’ve
built, called Snippy.
If a snippet doesn’t contain an ellipsis (...), then all of the code should be considered to be the body of the Main method of a program. If there is an ellipsis, then
everything before it is treated as declarations of methods and nested types, and everything after the ellipsis goes in the Main method. For example, consider this snippet:
static string Reverse(string input)
{
char[] chars = input.ToCharArray();
Array.Reverse(chars);
return new string(chars);
}
...
Console.WriteLine(Reverse("dlrow olleH"));
This is expanded by Snippy into the following:
using System;
public class Snippet
{
static string Reverse(string input)
{
char[] chars = input.ToCharArray();
Array.Reverse(chars);
return new string(chars);
}
[STAThread]
static void Main()
{
Console.WriteLine(Reverse("dlrow olleH"));
}
}
In reality, Snippy includes far more using directives, but the expanded version was
already getting | https://pl.b-ok.org/book/2193394/a183f0 | CC-MAIN-2019-47 | refinedweb | 15,687 | 53.1 |
GLib provides a standard method for error propagation called
GError. In this appendix, you will find a complete list of the
GError domains, as of GTK+ 2.10, along with the error types that correspond to each domain.
The
GError structure provides three elements: the error domain, a message string, and an error code.
struct GError { GQuark domain; gchar *message; gint code; };
Each error domain represents a group of similar error types. Example error domains include
G_BOOKMARK_FILE_ERROR,
GDK_PIXBUF_ERROR, and
G_FILE_ERROR. They are always named as
<NAMESPACE>_<MODULE>_ERROR, where the namespace is the library containing the function, and the module is the widget or object type.
The
message is a human-readable string ...
No credit card required | https://www.oreilly.com/library/view/foundations-of-gtk/9781590597934/Appendix_E.html | CC-MAIN-2018-51 | refinedweb | 118 | 58.69 |
On Mon 11 Nov 2002 (13:52 +0100), ?ystein O Johansen wrote: > > Those I didn't actually try to use it as patch input, but I'm not surprised if it fails. I hit the same problem when I did the rollout patches - patch simply couldn't deal with a lot of interleaved changes. In that case I actually used #if 0 #endif around the original code and put the new code in afterwards. I'm almost sure that this is the same problem here. It's probably not worth worrying about yet. Joseph has mailed me that he'd prefer a single set of movefilters which apply to all evaluations, rather than a separate set. As he's the one doing the changes, I'll follow his lead. I've asked him to send back the changes he's making - which I assume is to use pointers rather than an array of filters in the evalcontexts, or maybe simply keeping the filters entirely separate. Then I'll add something to the GUI - probably under settings advanced - to set up the filters. But, if you want, I can send you a copy of eval.c with the changes in place. -- Jim Segrave address@hidden | https://lists.gnu.org/archive/html/bug-gnubg/2002-11/msg00077.html | CC-MAIN-2021-04 | refinedweb | 204 | 82.04 |
I have two VIEW_3D area types open. Is there a way to differentiate between the two in Python, or possibly tag them for that purpose? I want to make a button for the render menu of the properties panel which will change one of the VIEW_3D area types to “Rendered” viewport shading mode.
How can I tell the difference between two of the same area types?
Areas and Spaces don’t support ID properties.
But you can check certain properties, like width and height of an area maybe.
Regions have IDs:
Thanks. Well I’m working on an addon so I don’t suppose a method like this would be very stable if the user started moving the screen around. Is there a way I could target a 3D View area type based on it being in the perspective of a camera? Or is there perhaps a camera command command for “Rendered” viewport shading? Thanks again.
I would use a region’s as_pointer() instead of id. Region ids are unique, but they cannot be used to identify a specific region, as they are not guaranteed to remain the same. If you maximize and restore other areas, the region ids tend to change where the region’s pointer value does not.
Note: Do not use area pointers, only region pointers. I usually use the area’s ‘WINDOW’ region.
Thanks SynaGl0w. Could you maybe post a one or two liner demonstrating how you use region pointers? I don’t even know what those would be concatenated with.
Just call the region’s as_pointer() method. It returns the pointer as an int.
I don’t know how to do that. If you don’t want to take the time to write a line explaining what that looks like, that’s ok, just making sure.
Simple as:
unique_value = region.as_pointer()
region being assigned the ‘WINDOW’ region of the area you want to identify.
Oi, lol. You weren’t kidding, that’s very short and simple. I’m not sure how to first assign region as the 3D View area type to get its pointer value. I’m very unfamiliar with this realm of Python within Blender. Could you possibly write me or direct me to an example of this in action? If that’s getting too tedious that’s fine, I may just need to do some more research than I want too before I can process this. Thanks.
for area in bpy.context.screen.areas: if area.type == 'VIEW_3D': for region in area.regions: if region.type == 'WINDOW': print(region.as_pointer()) break
You might also try:
hash(region)
to get an unique identifier.
Sorry to reply so late to this thread. Thanks for all the responses. CoDEmanX the script works only the values for the pointers change everytime I open up Blender, Same with using hash(region) as mentioned by pink vertex. The values change each time so I don’t know how I could use either method to identify a specific 3D View when they change every time I open Blender.
Also, even when only running Blender once I can’t seem to use this code to target in a command on a specific window. Anyone know what’s wrong?
import bpy for area in bpy.context.screen.areas: if area.type == 'VIEW_3D': for region in area.regions: if region.type == 'WINDOW': print(region.as_pointer()) region = region.as_pointer() if region == 68624488: bpy.context.space_data.viewport_shade = 'RENDERED' break
This just gives me an error. Ideally I’d like to be able to do stuff to a specific 3D View window once I am able to specify it.
Of it changes. I really wonder why you try to treat an area like an individual object - it simply is not. There are plenty other properties you could use, like size, position etc. to find a suitable one.
I’m afraid your comment got cut off there, not sure what you were saying before “Of it changes”. Why do I want to treat an area like an individual object? I simply want to be able to nest buttons in the properties panel which can affect a specific 3D View window’s settings such as viewport shading.
Pointers, hashes, etc. are all runtime unique, thus work for identifying things during runtime.
If you’re looking for constant way between runs to identify areas, I guess you could try just using the index for the areas you want of the active screen. I think those remain constant between runs, though I am not sure.
Example:
for index, area in enumerate(C.screen.areas): print("{}: {}".format(index, area.type))
The real question then becomes: With multiple areas of the same type how do you know which one is which? Well you can just figure it out in the console and hard-code the index when you access different areas, or you can go by area type and size (largest/smallest, etc.).
For example this sets the smallest View3d to rendered mode:
import bpy v3d_list = [area for area in bpy.context.screen.areas if area.type == 'VIEW_3D'] if v3d_list: smallest_area = min(v3d_list, key=lambda area: area.width * area.height) smallest_area.spaces.active.viewport_shade = 'RENDERED'
SynaGl0w that’s perfect. This gives a lot potential control for testing. Thanks all!
I meant to say “Of course it changes.” (SynaGl0w explained why)
If you want to affect a certain Area via a button, you would usually add an operator to the T or N toolshelf of that area. Run your operation on bpy.context.area, which will refers to the area the toolshelf region belongs to.
Imagine one control area that allows you to manipulate multiple screens where all of the headers and menus are closed on each screen giving you a clean presentation. Or perhaps you create presets which can run multiple settings on different screens so you can see the difference immediately. Targeting specific view screens allows you to do this. Awesomely, you guys helped me figure out how to do it today. | https://blenderartists.org/t/how-can-i-tell-the-difference-between-two-of-the-same-area-types/616073 | CC-MAIN-2019-30 | refinedweb | 1,002 | 75.91 |
On Mon, May 23, 2011 at 16:16, Warren Togami Jr. <wtogami gmail com> wrote: > Hi folks, > > > I'm currently working on putting LTSP into EPEL6 as it would be a compelling > long-term supported platform. Currently it seems that I will be able to put > all of the LTSP stack into EPEL6 with the exception of one package. Ok looking over the basics: 1) It does not replace any RH package via RPM/repo namespace 2) It does not overwrite any files owned by an RH package 3) It will have instructions on how to let others update it if needed. I don't see anything objectionable myself but want input from Dennis Gilmore and others (who are on travel to FUDcon panama). > I will need to build an alternative "ltsp-client-kernel" package which > contains a minimally stripped down kernel only for LTSP clients. It must be > small for embedded devices, and built with different options because the > standard EL6 kernel disabled some stuff we need like nbd.ko. The thin > clients would work great with this kernel combined with standard EL6 + EPEL6 > userspace packages. > > Maintenance would be reasonable because its source must only be kept in sync > with the EL6 kernel source. The ltsp-client-kernel package would be made in > such a way that it will not be pulled in by deps of non-LTSP packages, so > other users will not notice its existence at all. Furthermore, it will not > auto-build a /boot/initramfs-* image and add itself to grub.conf like a > standard kernel, because that is not how thin client embedded kernels are > handled with LTSP. > > Any objections? I will not be able to ship LTSP in EPEL without this > package. > > Warren Togami > warren togami com > > _______________________________________________ > | https://listman.redhat.com/archives/epel-devel-list/2011-May/msg00059.html | CC-MAIN-2021-21 | refinedweb | 294 | 68.1 |
Contstuff
From HaskellWiki
1 Introduction.
2 Basics
2.1 ContT
The ContT monad transformer is the simplest of all CPS-based monads.
2.2 Abortion
Let's have a look at a small example:
testComp1 :: ContT () IO () testComp1 = forever $ do txt <- io getLine case txt of "info" -> io $ putStrLn "This is a test computation." "quit" -> abort () _ -> return ()
2.3 Resumption and branchesYou can capture the current continuation using the common
labelCC :: a -> ContT r m (a, Label (ContT r m) a) goto :: Label (ContT r m) a -> a -> ContT r m.
2.4 Lifting
As noted earlier there are three lifting functions, which you can use to access monads in lower layers of the transformer stack:
lift :: (Transformer t, Monad m) => m a -> t m a base :: (LiftBase m a) => Base m a -> m a io :: (Base m a ~ IO a, LiftBase m a) => Base m a -> m a) | http://www.haskell.org/haskellwiki/index.php?title=Contstuff&oldid=36894 | CC-MAIN-2014-35 | refinedweb | 150 | 59.03 |
Some people, when confronted with a problem, think "I know, I'll use regular expressions." Now they have two problems.
?Jamie Zawinski, <alt.religion.emacs> (08/12/1997)
Many readers will have some background with regular expressions, but some will not have any. Those with experience using regular expressions in other languages (or in Python) can probably skip this tutorial section. But readers new to regular expressions (affectionately called regexes by users) should read this section; even some with experience can benefit from a refresher.
A regular expression is a compact way of describing complex patterns in texts. You can use them to search for patterns and, once found, to modify the patterns in complex ways. They can also be used to launch programmatic actions that depend on patterns.
Jamie Zawinski's tongue-in-cheek comment in the epigram is worth thinking about. Regular expressions are amazingly powerful and deeply expressive. That is the very reason that writing them is just as error-prone as writing any other complex programming code. It is always better to solve a genuinely simple problem in a simple way; when you go beyond simple, think about regular expressions.
A large number of tools other than Python incorporate regular expressions as part of their functionality. Unix-oriented command-line tools like grep, sed, and awk are mostly wrappers for regular expression processing. Many text editors allow search and/or replacement based on regular expressions. Many programming languages, especially other scripting languages such as Perl and TCL, build regular expressions into the heart of the language. Even most command-line shells, such as Bash or the Windows-console, allow restricted regular expressions as part of their command syntax.
There are some variations in regular expression syntax between different tools that use them, but for the most part regular expressions are a "little language" that gets embedded inside bigger languages like Python. The examples in this tutorial section (and the documentation in the rest of the chapter) will focus on Python syntax, but most of this chapter transfers easily to working with other programming languages and tools.
As with most of this book, examples will be illustrated by use of Python interactive shell sessions that readers can type themselves, so that they can play with variations on the examples. However, the re module has little reason to include a function that simply illustrates matches in the shell. Therefore, the availability of the small wrapper program below is implied in the examples:
import re def re_show(pat, s): print re.compile(pat, re.M).sub("{\g<0>}", s.rstrip()),'\n' s = '''Mary had a little lamb And everywhere that Mary went, the lamb was sure to go'''
Place the code in an external module and import it. Those new to regular expressions need not worry about what the above function does for now. It is enough to know that the first argument to re_show() will be a regular expression pattern, and the second argument will be a string to be matched against. The matches will treat each line of the string as a separate pattern for purposes of matching beginnings and ends of lines. The illustrated matches will be whatever is contained between curly braces.
The very simplest pattern matched by a regular expression is a literal character or a sequence of literal characters. Anything in the target text that consists of exactly those characters in exactly the order listed will match. A lowercase character is not identical with its uppercase version, and vice versa. A space in a regular expression, by the way, matches a literal space in the target (this is unlike most programming languages or command-line tools, where a variable number of spaces separate keywords).
>>> from re_show import re_show, s >>> re_show('a', s) M{a}ry h{a}d {a} little l{a}mb. And everywhere th{a}t M{a}ry went, the l{a}mb w{a}s sure to go. >>> re_show('Mary', s) {Mary} had a little lamb. And everywhere that {Mary} went, the lamb was sure to go.
A number of characters have special meanings to regular expressions. A symbol with a special meaning can be matched, but to do so it must be prefixed with the backslash character (this includes the backslash character itself: To match one backslash in the target, the regular expression should include \\). In Python, a special way of quoting a string is available that will not perform string interpolation. Since regular expressions use many of the same backslash-prefixed codes as do Python strings, it is usually easier to compose regular expression strings by quoting them as "raw strings" with an initial "r".
>>> from re_show import re_show >>>>> re_show(r'.*', s) {Special characters must be escaped.*} >>> re_show(r'\.\*', s) Special characters must be escaped{.*} >>> re_show('\\\\', r'Python \ escaped \ pattern') Python {\} escaped {\} pattern >>> re_show(r'\\', r'Regex \ escaped \ pattern') Regex {\} escaped {\} pattern
Two special characters are used to mark the beginning and end of a line: caret ("^") and dollar sign ("$"). To match a caret or dollar sign as a literal character, it must be escaped (i.e., precede it by a backslash "\").
An interesting thing about the caret and dollar sign is that they match zero-width patterns. That is, the length of the string matched by a caret or dollar sign by itself is zero (but the rest of the regular expression can still depend on the zero-width match). Many regular expression tools provide another zero-width pattern for word-boundary ("\b"). Words might be divided by whitespace like spaces, tabs, newlines, or other characters like nulls; the word-boundary pattern matches the actual point where a word starts or ends, not the particular whitespace characters.
>>> from re_show import re_show, s >>> re_show(r'^Mary', s) {Mary} had a little lamb And everywhere that Mary went, the lamb was sure to go >>> re_show(r'Mary$', s) Mary had a little lamb And everywhere that {Mary} went, the lamb was sure to go >>> re_show(r'$','Mary had a little lamb') Mary had a little lamb{}
In regular expressions, a period can stand for any character. Normally, the newline character is not included, but optional switches can force inclusion of the newline character also (see of re module functions). Using a period in a pattern is a way of requiring that "something" occurs here, without having to decide what.
Readers who are familiar with DOS command-line wildcards will know the question mark as filling the role of "some character" in command masks. But in regular expressions, the question mark has a different meaning, and the period is used as a wildcard.
>>> from re_show import re_show, s >>> re_show(r'.a', s) {Ma}ry {ha}d{ a} little {la}mb And everywhere t{ha}t {Ma}ry went, the {la}mb {wa}s sure to go
A regular expression can have literal characters in it and also zero-width positional patterns. Each literal character or positional pattern is an atom in a regular expression. One may also group several atoms together into a small regular expression that is part of a larger regular expression. One might be inclined to call such a grouping a "molecule," but normally it is also called an atom.
In older Unix-oriented tools like grep, subexpressions must be grouped with escaped parentheses; for example, \ (Mary\). In Python (as with most more recent tools), grouping is done with bare parentheses, but matching a literal parenthesis requires escaping it in the pattern.
>>> from re_show import re_show, s >>> re_show(r'(Mary)( )(had)', s) {Mary had} a little lamb And everywhere that Mary went, the lamb was sure to go >>> re_show(r'\(.*\)', 'spam (and eggs)') spam {(and eggs)}
Rather than name only a single character, a pattern in a regular expression can match any of a set of characters.
A set of characters can be given as a simple list inside square brackets; for example, [aeiou] will match any single lowercase vowel. For letter or number ranges it may also have the first and last letter of a range, with a dash in the middle; for example, [A-Ma-m] will match any lowercase or uppercase letter in the first half of the alphabet.
Python (as with many tools) provides escape-style shortcuts to the most commonly used character class, such as \s for a whitespace character and \d for a digit. One could always define these character classes with square brackets, but the shortcuts can make regular expressions more compact and more readable.
>>> from re_show import re_show, s >>> re_show(r'[a-z]a', s) Mary {ha}d a little {la}mb And everywhere t{ha}t Mary went, the {la}mb {wa}s sure to go
The caret symbol can actually have two different meanings in regular expressions. Most of the time, it means to match the zero-length pattern for line beginnings. But if it is used at the beginning of a character class, it reverses the meaning of the character class. Everything not included in the listed character set is matched.
>>> from re_show import re_show, s >>> re_show(r'[^a-z]a', s) {Ma}ry had{ a} little lamb And everywhere that {Ma}ry went, the lamb was sure to go
Using character classes is a way of indicating that either one thing or another thing can occur in a particular spot. But what if you want to specify that either of two whole subexpressions occur in a position in the regular expression? For that, you use the alternation operator, the vertical bar ("|"). This is the symbol that is also used to indicate a pipe in Unix/DOS shells and is sometimes called the pipe character.
The pipe character in a regular expression indicates an alternation between everything in the group enclosing it. What this means is that even if there are several groups to the left and right of a pipe character, the alternation greedily asks for everything on both sides. To select the scope of the alternation, you must define a group that encompasses the patterns that may match. The example illustrates this:
>>> from re_show import re_show >>>>> re_show(r'cat|dog|bird', s2) The pet store sold {cat}s, {dog}s, and common things you can do with regular expressions is to specify how many times an atom occurs in a complete regular expression. Sometimes you want to specify something about the occurrence of a single character, but very often you are interested in specifying the occurrence of a character class or a grouped subexpression.
There is only one quantifier included with "basic" regular expression syntax, the asterisk ("*"); in English this has the meaning "some or none" or "zero or more." If you want to specify that any number of an atom may occur as part of a pattern, follow the atom by an asterisk.
Without quantifiers, grouping expressions doesn't really serve as much purpose, but once we can add a quantifier to a subexpression we can say something about the occurrence of the subexpression as a whole. Take a look at the example:
>>> from re_show import re_show >>>>> re_show(r'@(=!=)*@', s) Match with zero in the middle: {@@} Subexpression occurs, but...: @=!=ABC@ Lots of occurrences: {@=!==!==!==!==!=@} Must repeat entire pattern: @=!==!=!==!=@
In a certain way, the lack of any quantifier symbol after an atom quantifies the atom anyway: It says the atom occurs exactly once. Extended regular expressions add a few other useful numbers to "once exactly" and "zero or more times." The plus sign ("+") means "one or more times" and the question mark ("?") means "zero or one times." These quantifiers are by far the most common enumerations you wind up using.
If you think about it, you can see that the extended regular expressions do not actually let you "say" anything the basic ones do not. They just let you say it in a shorter and more readable way. For example, (ABC)+ is equivalent to (ABC)(ABC)*, and X(ABC)?Y is equivalent to XABCY|XY. If the atoms being quantified are themselves complicated grouped subexpressions, the question mark and plus sign can make things a regular expressions, you can specify arbitrary pattern occurrence counts using a more verbose syntax than the question mark, plus sign, and asterisk quantifiers. The curly braces ("{" and "}") can surround a precise count of how many occurrences you are looking for.
The most general form of the curly-brace quantification uses two range arguments (the first must be no larger than the second, and both must be non-negative integers). The occurrence count is specified this way to fall between the minimum and maximum indicated (inclusive). As shorthand, either argument may be left empty: If so, the minimum/maximum is specified as zero/infinity, respectively. If only one argument is used (with no comma in there), exactly that number of occurrences are matched.
>>> from re_show import re_show >>>>> re_show(r'a{5} b{,6} c{4,8}', s2) {aaaaa bbbbb ccccc} aaa bbb ccc aaaaa bbbbbbbbbbbbbb ccccc >>> re_show(r'a+ b{3,} c?', s2) {aaaaa bbbbb c}cccc {aaa bbb c}cc {aaaaa bbbbbbbbbbbbbb c}cccc >>> re_show(r'a{5} b{6,} c{4,8}', s2) aaaaa bbbbb ccccc aaa bbb ccc {aaaaa bbbbbbbbbbbbbb ccccc}
One powerful option in creating search patterns is specifying that a subexpression that was matched earlier in a regular expression is matched again later in the expression. We do this using backreferences. Backreferences are named by the numbers 1 through 99, preceded by the backslash/escape character when used in this manner. These backreferences refer to each successive group in the match pattern, as in (one) (two) (three) \1\2\3. Each numbered backreference refers to the group that, in this example, has the word corresponding to the number.
It is important to note something the example illustrates. What gets matched by a backreference is the same literal string matched the first time, even if the pattern that matched the string could have matched other strings. Simply repeating the same grouped subexpression later in the regular expression does not match the same targets as using a backreference (but you have to decide what it is you actually want to match in either case).
Backreferences refer back to whatever occurred in the previous grouped expressions, in the order those grouped expressions occurred. Up to 99 numbered backreferences may be used. However, Python also allows naming backreferences, which can make it much clearer what the backreferences are pointing to. The initial pattern group must begin with ?P<name>, and the corresponding backreference must contain (?P=name).
>>> from re_show import re_show >>>>> re_show(r'(abc|xyz) \1', s2) jkl abc xyz jkl xyz abc jkl {abc abc} jkl {xyz xyz} >>> re_show(r'(abc|xyz) (abc|xyz)', s2) jkl {abc xyz} jkl {xyz abc} jkl {abc abc} jkl {xyz xyz} >>> re_show(r'(?P<let3>abc|xyz) (?P=let3)', s2) jkl abc xyz jkl xyz abc jkl {abc abc} jkl {xyz xyz}
Quantifiers in regular expressions are greedy. That is, they match as much as they possibly can.
Probably the easiest mistake to make in composing regular expressions is to match too much. When you use a quantifier, you want it to match everything (of the right sort) up to the point where you want to finish your match. But when using the *, +, or numeric quantifiers, it is easy to forget that the last bit you are looking for might occur later in a line than the one you are interested in.
>>>
Often if you find that regular expressions are matching too much, a useful procedure is to reformulate the problem in your mind. Rather than thinking about, "What am I trying to match later in the expression?" ask yourself, "What do I need to avoid matching in the next part?" This often leads to more parsimonious pattern matches. Often the way to avoid a pattern is to use the complement operator and a character class. Look at the example, and think about how it works.
The trick here is that there are two different ways of formulating almost the same sequence. Either you can think you want to keep matching until you get to XYZ, or you can think you want to keep matching unless you get to XYZ. These are subtly different.
For people who have thought about basic probability, the same pattern occurs. The chance of rolling a 6 on a die in one roll is
. What is the chance of rolling a 6 in six rolls? A naive calculation puts the odds at
+
+
+
+
+
, or 100 percent. This is wrong, of course (after all, the chance after twelve rolls isn't 200 percent). The correct calculation is, "How do I avoid rolling a 6 for six rolls?" (i.e.,
x
x
x
x
x
, or about 33 percent). The chance of getting a 6 is the same chance as not avoiding it (or about 66 percent). In fact, if you imagine transcribing a series of die rolls, you could apply a regular expression to the written record, and similar thinking applies.
>>>
Not all tools that use regular expressions allow you to modify target strings. Some simply locate the matched pattern; the mostly widely used regular expression tool is probably grep, which is a tool for searching only. Text editors, for example, may or may not allow replacement in their regular expression search facility.
Python, being a general programming language, allows sophisticated replacement patterns to accompany matches. Since Python strings are immutable, re functions do not modify string objects in place, but instead return the modified versions. But as with functions in the string module, one can always rebind a particular variable to the new string object that results from re modification.
Replacement examples in this tutorial will call a function re_new() that is a wrapper for the module function re.sub (). Original strings will be defined above the call, and the modified results will appear below the call and with the same style of additional markup of changed areas as re_show() used. Be careful to notice that the curly braces in the results displayed will not be returned by standard re functions, but are only added here for emphasis. Simply import the following function in the examples below:
import re def re_new(pat, rep, s): print re.sub(pat, '{'+rep+'}', s)
Let us take a look at a couple of modification examples that build on what we have already covered. This one simply substitutes some literal text for some other literal text. Notice that string.replace() can achieve the same result and will be faster in doing so.
>>> from re_new import re_new >>>>> re_new('cat','dog',s) The zoo had wild dogs, bob{dog}s, lions, and other wild {dog}s.
Most of the time, if you are using regular expressions to modify a target text, you will want to match more general patterns than just literal strings. Whatever is matched is what gets replaced (even if it is several different strings in the target):
>>> from re_new import re_new >>>>> re_new('cat|dog','snake',s) The zoo had wild {snake}s, bob{snake}s, lions, and other wild {snake}s. >>> re_new(r'[a-z]+i[a-z]*','nice',s) The zoo had {nice} dogs, bobcats, {nice}, and other {nice} cats.
It is nice to be able to insert a fixed string everywhere a pattern occurs in a target text. But frankly, doing that is not very context sensitive. A lot of times, we do not want just to insert fixed strings, but rather to insert something that bears much more relation to the matched patterns. Fortunately, backreferences come to our rescue here. One can use backreferences in the pattern matches themselves, but it is even more useful to be able to use them in replacement patterns. By using replacement backreferences, one can pick and choose from the matched patterns to use just the parts of interest.
As well as backreferencing, the examples below illustrate the importance of whitespace in regular expressions. In most programming code, whitespace is merely aesthetic. But the examples differ solely in an extra space within the arguments to the second call?and the return value is importantly different.
>>> from re_new import re_new >>>>> re_new(r'([A-Z])([0-9]{2,4})',r'\2:\1',s) {37:A} B4 {107:C} {5411:D}2 {1103:E} XXX >>> re_new(r'([A-Z])([0-9]{2,4}) ',r'\2:\1 ',s) {37:A }B4 {107:C }D54112 {1103:E }XXX
This tutorial has already warned about the danger of matching too much with regular expression patterns. But the danger is so much more serious when one does modifications, that it is worth repeating. If you replace a pattern that matches a larger string than you thought of when you composed the pattern, you have potentially deleted some important data from your target.
It is always a good idea to try out regular expressions on diverse target data that is representative of production usage. Make sure you are matching what you think you are matching. A stray quantifier or wildcard can make a surprisingly wide variety of texts match what you thought was a specific pattern. And sometimes you just have to stare at your pattern for a while, or find another set of eyes, to figure out what is really going on even after you see what matches. Familiarity might breed contempt, but it also instills competence.
Some very useful enhancements to basic regular expressions are included with Python (and with many other tools). Many of these do not strictly increase the power of Python's regular expressions, but they do manage to make expressing them far more concise and clear.
Earlier in the tutorial, the problems of matching too much were discussed, and some workarounds were suggested. Python is nice enough to make this easier by providing optional "non-greedy" quantifiers. These quantifiers grab as little as possible while still matching whatever comes next in the pattern (instead of as much as possible).
Non-greedy quantifiers have the same syntax as regular greedy ones, except with the quantifier followed by a question mark. For example, a non-greedy pattern might look like: A[A-Z] *?B. In English, this means "match an A, followed by only as many capital letters as are needed to find a B."
One little thing to look out for is the fact that the pattern [A-Z]*?. will always match zero capital letters. No longer matches are ever needed to find the following "any character" pattern. If you use non-greedy quantifiers, watch out for matching too little, which is a symmetric danger.
>>> from re_show import re_show >>>>> re_show(r'th.*s',s) -- I want to match {the words that s}tart -- wi{th 'th' and end with 's}'. {this line matches jus}t right {this # thus # this}tle >>> re_show(r'th.*?s',s) -- I want to match {the words} {that s}tart -- wi{th 'th' and end with 's}'. {this} line matches just right {this} # {thus} # {this}tle >>> re_show(r'th.*?s ',s) -- I want to match {the words }that start -- with 'th' and end with 's'. {this }line matches just right {this }# {thus }# thistle
Modifiers can be used in regular expressions or as arguments to many of the functions in re. A modifier affects, in one way or another, the interpretation of a regular expression pattern. A modifier, unlike an atom, is global to the particular match?in itself, a modifier doesn't match anything, it instead constrains or directs what the atoms match.
When used directly within a regular expression pattern, one or more modifiers begin the whole pattern, as in (?Limsux). For example, to match the word cat without regard to the case of the letters, one could use (?i)cat. The same modifiers may be passed in as the last argument as bitmasks (i.e., with a | between each modifier), but only to some functions in the re module, not to all. For example, the two calls below are equivalent:
>>> import re >>> re.search(r'(?Li)cat','The Cat in the Hat').start() 4 >>> re.search(r'cat','The Cat in the Hat',re.L|re.I).start() 4
However, some function calls in re have no argument for modifiers. In such cases, you should either use the modifier prefix pseudo-group or precompile the regular expression rather than use it in string form. For example:
>>> import re >>> re.split(r'(?i)th','Brillig and The Slithy Toves') ['Brillig and ', 'e Sli', 'y Toves'] >>> re.split(re.compile('th',re.I),'Brillig and the Slithy Toves') ['Brillig and ', 'e Sli', 'y Toves']
See the re module documentation for details on which functions take which arguments.
The modifiers listed below are used in re expressions. Users of other regular expression tools may be accustomed to a g option for "global" matching. These other tools take a line of text as their default unit, and "global" means to match multiple lines. Python takes the actual passed string as its unit, so "global" is simply the default. To operate on a single line, either the regular expressions have to be tailored to look for appropriate begin-line and end-line characters, or the strings being operated on should be split first using string.split() or other means.
* L (re.L) - Locale customization of \w, \W, \b, \B * i (re.I) - Case-insensitive match * m (re.M) - Treat string as multiple lines * s (re.S) - Treat string as single line * u (re.U) - Unicode customization of \w, \W, \b, \B * x (re.X) - Enable verbose regular expressions
The single-line option ("s") allows the wildcard to match a newline character (it won't otherwise). The multiple-line option ("m") causes "^" and "$" to match the beginning and end of each line in the target, not just the begin/end of the target as a whole (the default). The insensitive option ("i") ignores differences between the case of letters. The Locale and Unicode options ("L" and "u") give different interpretations to the word-boundary ("\b") and alphanumeric ("\w") escaped patterns?and their inverse forms ("\B" and "\W").
The verbose option ("x") is somewhat different from the others. Verbose regular expressions may contain nonsignificant whitespace and inline comments. In a sense, this is also just a different interpretation of regular expression patterns, but it allows you to produce far more easily readable complex patterns. Some examples follow in the sections below.
Let's take a look first at how case-insensitive and single-line options change the match behavior.
>>> from re_show import re_show >>>>> re_show(r'M.*[ise] ', s) {MAINE # Massachusetts }# Colorado # mississippi # {Missouri }# Minnesota # >>> re_show(r'(?i)M.*[ise] ', s) {MAINE # Massachusetts }# Colorado # {mississippi # Missouri }# Minnesota # >>> re_show(r'(?si)M.*[ise] ', s) {MAINE # Massachusetts # Colorado # mississippi # Missouri }# Minnesota #
Looking back to the definition of re_show(), we can see it was defined to explicitly use the multiline option. So patterns displayed with re_show() will always be multiline. Let us look at a couple of examples that use re.findall() instead.
>>> from re_show import re_show >>>>> re_show(r'(?im)^M.*[ise] ', s) {MAINE # Massachusetts }# Colorado # {mississippi # Missouri }# Minnesota # >>> import re >>> re.findall(r'(?i)^M.*[ise] ', s) ['MAINE # Massachusetts '] >>> re.findall(r'(?im)^M.*[ise] ', s) ['MAINE # Massachusetts ', 'mississippi # Missouri ']
Matching word characters and word boundaries depends on exactly what gets counted as being alphanumeric. Character codepages for letters outside the (US-English) ASCII range differ among national alphabets. Python versions are configured to a particular locale, and regular expressions can optionally use the current one to match words.
Of greater long-term significance is the re module's ability (after Python 2.0) to look at the Unicode categories of characters, and decide whether a character is alphabetic based on that category. Locale settings work OK for European diacritics, but for non-Roman sets, Unicode is clearer and less error prone. The "u" modifier controls whether Unicode alphabetic characters are recognized or merely ASCII ones:
>>> import re >>> alef, omega = unichr(1488), unichr(969) >>> u = alef +' A b C d '+omega+' X y Z' >>> u, len(u.split()), len(u) (u'\u05d0 A b C d \u03c9 X y Z', 9, 17) >>> ':'.join(re.findall(ur'\b\w\b', u)) u'A:b:C:d:X:y:Z' >>> ':'.join(re.findall(ur'(?u)\b\w\b', u)) u'\u05d0:A:b:C:d:\u03c9:X:y:Z'
Backreferencing in replacement patterns is very powerful, but it is easy to use many groups in a complex regular expression, which can be confusing to identify. It is often more legible to refer to the parts of a replacement pattern in sequential order. To handle this issue, Python's re patterns allow "grouping without backreferencing."
A group that should not also be treated as a backreference has a question mark colon at the beginning of the group, as in (?:pattern). In fact, you can use this syntax even when your backreferences are in the search pattern itself:
>>> from re_new import re_new >>>>> re_new(r'([A-Z])(?:-[a-z]{3}-)([0-9]*)', r'\1\2', s) {A37} # B:abcd:142 # {C66} # {D93} >>> # Groups that are not of interest excluded from backref ... >>> re_new(r'([A-Z])(-[a-z]{3}-)([0-9]*)', r'\1\2', s) {A-xyz-} # B:abcd:142 # {C-wxy-} # {D-qrs-} >>> # One could lose track of groups in a complex pattern ...
Python offers a particularly handy syntax for really complex pattern backreferences. Rather than just play with the numbering of matched groups, you can give them a name. Above we pointed out the syntax for named backreferences in the pattern space; for example, (?P=name). However, a bit different syntax is necessary in replacement patterns. For that, we use the \g operator along with angle brackets and a name. For example:
>>> from re_new import re_new >>>>> re_new(r'(?P<prefix>[A-Z])(-[a-z]{3}-)(?P<id>[0-9]*)', ... r'\g<prefix>\g<id>',s) {A37} # B:abcd:142 # {C66} # D93} (more general) subexpression actually grab it (usually for purposes of backreferencing that other subexpression).
There are two kinds of lookahead assertions: positive and negative. As you would expect, a positive assertion specifies that something does come next, and a negative one specifies that something does not come next. Emphasizing their connection with non-backreferenced groups, the syntax for lookahead assertions is similar: (?=pattern) for positive assertions, and (?!pattern) for negative assertions.
>>> from re_new import re_new >>>>> # Assert that three lowercase letters occur after CAP-DASH ... >>> re_new(r'([A-Z]-)(?=[a-z]{3})([\w\d]*)', r'\2\1', s) {xyz37A-} # B-ab6142 # C-Wxy66 # {qrs93D-} >>> # Assert three lowercase letts do NOT occur after CAP-DASH ... >>> re_new(r'([A-Z]-)(?![a-z]{3})([\w\d]*)', r'\2\1', s) A-xyz37 # {ab6142B-} # {Wxy66C-} # D-qrs93
Along with lookahead assertions, Python 2.0+ adds "lookbehind assertions." The idea is similar?a pattern is of interest only if it is (or is not) preceded by some other pattern. Lookbehind assertions are somewhat more restricted than lookahead assertions because they may only look backwards by a fixed number of character positions. In other words, no general quantifiers are allowed in lookbehind assertions. Still, some patterns are most easily expressed using lookbehind assertions.
As with lookahead assertions, lookbehind assertions come in a negative and a positive flavor. The former assures that a certain pattern does not precede the match, the latter assures that the pattern does precede the match.
>>> from re_new import re_new >>> re_show('Man', 'Manhandled by The Man') {Man}handled by The {Man} >>> re_show('(?<=The )Man', 'Manhandled by The Man') Manhandled by The {Man} >>> re_show('(?<!The )Man', 'Manhandled by The Man') {Man}handled by The Man
In the later examples we have started to see just how complicated regular expressions can get. These examples are not the half of it. It is possible to do some almost absurdly difficult-to-understand things with regular expression (but ones that are nonetheless useful).
There are two basic facilities that Python's "verbose" modifier ("x") uses in clarifying expressions. One is allowing regular expressions to continue over multiple lines (by ignoring whitespace like trailing spaces and newlines). The second is allowing comments within regular expressions. When patterns get complicated, do both!
The example given is a fairly typical example of a complicated, but well-structured and well-commented, regular expression:
>>> from re_show import re_show >>>>> pat = r''' (?x)( # verbose identify URLs within text ... (http|ftp|gopher) # make sure we find a resource type ... :// # ...needs to be followed by colon-slash-slash ... [^ \n\r]+ # some stuff then space, newline, tab is URL ... \w # URL always ends in alphanumeric char ... (?=[\s\.,]) # assert: followed by whitespace/period/comma ... ) # end of match group''' >>> re_show(pat, s) The URL for my site is: {}. You might also enjoy {} for a good place to download files. | http://etutorials.org/Programming/Python.+Text+processing/Chapter+3.+Regular+Expressions/3.1+A+Regular+Expression+Tutorial/ | CC-MAIN-2018-09 | refinedweb | 5,445 | 62.98 |
>
Other
Could anyone help change my script?
I have an object "Ball".
My object is a ball and it bounces up.
Every time when I press the left mouse button, "Ball" jumps up.
Each up move adds points.
Now my question.
When the object falls, the points decrease, but the speed of free fall continues to grow.
This causes points to not keep up with the falling ball.
The result is that when the ball stop to the ground, instead of having 0 points it has, for example, 66.
Can you determine the speed of falling?
How do that?
For now, I just add to my object:
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.UI;
public class Score : MonoBehaviour {
public Text scoreText;
[SerializeField]
int score = 0;
[SerializeField]
Vector3 _lastPosition;
// Use this for initialization
void Start () {
_lastPosition = this.transform.position;
}
// Update is called once per frame
void Update () {
if( this.transform.position.y > _lastPosition.y )
score++;
else{
score--;
}
_lastPosition = this.transform.position;
scoreText.text = "Score" + score;
}
}
Answer by anthot4
·
May 09, 2018 at 02:24 PM
You could check the velocity of the rigidbody as soon as the ball jumps. Have two if statements one to check for positive velocity and if it is add to the score and if it is negative subtract from the score. Have this inside your if statement which check for left mouse button down on line 20.
If I can be honest, I will know how to do it with the condition of adding points, but with subtraction I will not know how to do it.
Something like this:
Vector3 LastPosition = Vector3.zero;
Void FixedUpdate(){
speed = (transform.position - LastPosition).magnitute; LastPosition = transform.position;
if (Input.GetMouseButtonDown(0){
if (speed >=0){ Score++; }
if (speed < 0){ Score--; } }
}
Answer by polan31
·
May 09, 2018 at 02:48 PM
Can not somehow be addicted to the height Y?
For example, as the height grows and the points grow, as the height fall and the points fall.
You could do that it would be something like this:
score += transform.y; // this would add the transform point on y to your score.
If your doing it that way it would be best to put the code above in a coroutine then add it to the players score every second or so.
Ok, I found something like this on unity servers and works in my 2d application.
However, I still can not transform it in such a way that the points are going down.
Answer by Guy_Yome
·
May 10, 2018 at 07:55 PM
Okay I have a solution. I assume that your ball will be jumping on platforms that are maybe generated on the "fly". So you'll have to make a list of platforms where you will add all the new platforms you create at the end of the list. You could use the collision detection built in Unity to help you detect what you are doing. Whenever you land on a new and higher platform, you will have to remove the first element in the list (which is the platform below). Doing so, the only platforms in the list will be the platform you are on and the platforms that are above you (Shown in blue).
Whenever you fall and collide with a platform that is not in the list (Shown in red), it will mean that it was a platform below and you will make the player lose points. Whenever you land on the first element of the list, you landed on your own platform and gained nothing. And for the rest (above), you will gain points. One other case is if you fall in the void (not on a platform), you could use a height that is an arbitrary distance lower than the actual platform (red line in the drawing) to see if the ball is lower than that. If it is, lose points and respawn the ball on the last platform. I hope this helps.
The blue platforms shown are the ones still in the ist. The red ones are not in the list anymore. The arrows say what actions you did to reach this position. down = fall, up = jump The red line is the point of falling in the void. The green circle is your.
JavaScript: Sound & Points deduct On click Help!
0
Answers
Problem with reducing points
1
Answer
Adding and Substracting ammunition
1
Answer
weird result when substracting/adding;
1
Answer
Using RayCast to Get Mouse Input
1
Answer | https://answers.unity.com/questions/1504055/how-to-add-and-subtract-points.html | CC-MAIN-2019-26 | refinedweb | 750 | 82.44 |
Work at SourceForge, help us to make it a better place! We have an immediate need for a Support Technician in our San Francisco or Denver office.
You can subscribe to this list here.
Showing
3
results of 3
Hi.
>
> How do I run a piece of frozen python script within an embedded jython
> interpreter?
First jythonc and the interpeter are not incompatible but do not
collabarate much. See:
In your case is not a problem because you want construct
your Python/Java object on Java side.
> i.e. How do I get the following contrived example to work?
>
> Foo.py:
> class Foo:
> def __init__(self, blah):
> self.blah = blah
> def doSomething():
> blah.doSomethingElse()
>
> Foo.py is precompiled to Foo.class using jythonc,
This produces a java class that correspond to a Python module
not to a class, the java class has just basically a main method.
If you want a Java (proxy) class corresponding to a python class,
your python class should inherit from Java:
import java.
>
> PyObject f = new Foo(); // how do I call __init__ with arguments?
Now something like this
Foo f = new Foo("blah");
should work.
Notice that java side Foo class does not inherit form PyObject
in this case.
>
> PythonInterpreter interp = new PythonInterpreter();
> interp.set("foo", f);
> interp.exec("foo.doSomething()");
This works because the java/python conversion behind the scene
accesssz the full-fledged python instance, for which the java class
Foo is just a proxy :) .
>
> Alternatively, how would I bytecompile a script, and pass the code object
> to the interpreter? I tried to use jythonc to create a class file, and
> then used that as a source for an InputStream to
> PythonInterpreter.exexfile(),
This doesn't work.
> but that just gave me syntax errors on the
> binary data at runtime. I guess it needs to be a python byte compiled
> file instead...?
No jython does not never deal with python bytecode.
> bar.py:
> print "hello world"
> print "bye world"
>
> How do I bytecompile bar.py?
> how do I then load and run the bytecompiled bar.py using a
> PythonInterpreter instance?
>
*warning* hack:
org.python.core.PyRunnable bar = new bar._PyInner();
PyObject code = bar.getMain();
interp.exec(code)
or some flavor of what was in the posting cited
above.
regards.
How do I run a piece of frozen python script within an embedded jython
interpreter?
i.e. How do I get the following contrived example to work?
Foo.py:
class Foo:
def __init__(self, blah):
self.blah = blah
def doSomething():
blah.doSomethingElse()
Foo.py is precompiled to Foo.class using jythonc,
PyObject f = new Foo(); // how do I call __init__ with arguments?
PythonInterpreter interp = new PythonInterpreter();
interp.set("foo", f);
interp.exec("foo.doSomething()");
Alternatively, how would I bytecompile a script, and pass the code object
to the interpreter? I tried to use jythonc to create a class file, and
then used that as a source for an InputStream to
PythonInterpreter.exexfile(), but that just gave me syntax errors on the
binary data at runtime. I guess it needs to be a python byte compiled
file instead...?
bar.py:
print "hello world"
print "bye world"
How do I bytecompile bar.py?
how do I then load and run the bytecompiled bar.py using a
PythonInterpreter instance?
Any help appreciated, thanks,
Matt
matt_conway@...
Barnabas Wolf wrote:
>I'm trying to add Python scripting capabilities to an existing Java
>application. I would need to be able to execute scripts fully
>independently in different threads. I've attempted this by creating
>an instance of PythonInterpreter in each of the threads. This approach
>appears to work, but not perfectly. For example, all interpreters
>share the same IO streams -- calling setOut(), setErr(), etc. on any
>of the interpreters will redirect *all* of the interpreters to the
>new stream. This particular problem is not a complete show stopper,
>but it makes me worry about how independent the interpreters really
>are.
I have also tried to use multiple interpreters in different threads.
That was a while back using JPython 1.1. At that time, the different
threads corrupted each other in short order. I have not tried again
recently, but after looking inside the jython code, I see that quite a
lot of state information is stored statically, which leads me to believe
that jython was not designed for multithreaded used in this way. If so,
that is a shame, as this would be very useful.
Perhaps someone knows otherwise and can tell us what we are doing wrong?
-Paul
--
Paul Giotta
Software Architect
paul.giotta@...
Office: +41 1 445 2370 | Fax: +41 1 445 2372 | Mobile: +41 76 389 1180
Technoparkstr.1, 8005 Zurich, Switzerland |
* e2e Java Messaging, Pure and Simple. * | http://sourceforge.net/p/jython/mailman/jython-users/?viewmonth=200110&viewday=1 | CC-MAIN-2014-35 | refinedweb | 781 | 68.36 |
Seismo-Live:
This notebook is a very quick introduction to Python and in particular its scientific ecosystem in case you have never seen it before. It furthermore grants a possibility to get to know the IPython/Jupyter notebook. See here for the official documentation of the Jupyter notebook - a ton more information can be found online.
A lot of motivational writing on Why Python? is out there so we will not repeat it here and just condense it to a single sentence: Python is a good and easy to learn, open-source, general purpose programming language that happens to be very good for many scientific tasks (due to its vast scientific ecosystem).
Shift + Enter: Execute cell and jump to the next cell
Ctrl/Cmd + Enter: Execute cell and don't jump to the next cell
The tutorials are employing Jupyter notebooks but these are only one way of using Python. Writing scripts to text files and executing them with the Python interpreter of course also works:
$ python do_something.py
Another alternative is interactive usage on the command line:
$ ipython
First things first: In many notebooks you will find a cell similar to the following one. Always execute it! They do a couple of things:
print("Hello")
This essentially makes the notebooks work under Python 2 and Python 3.
# Plots now appear in the notebook. %matplotlib inline from __future__ import print_function, division # Python 2 and 3 are now very similar import matplotlib.pyplot as plt plt.style.use('ggplot') # Matplotlib style sheet - nicer plots! plt.rcParams['figure.figsize'] = 12, 8 # Slightly bigger plots by default
Here is collection of resources regarding the scientific Python ecosystem. They cover a number of different packages and topics; way more than we will manage today.
If you have any question regarding some specific Python functionality you can consult the official Python documenation.
Furthermore a large number of Python tutorials, introductions, and books are available online. Here are some examples for those interested in learning more.
Some people might be used to Matlab - this helps:
Additionally there is an abundance of resources introducing and teaching parts of the scientific Python ecosystem.
You might eventually have a need to create some custom plots. The quickest way to success is usually to start from some example that is somewhat similar to what you want to achieve and just modify it. These websites are good starting points:
This course is fairly non-interactive and serves to get you up to speed with Python assuming you have practical programming experience with at least one other language. Nonetheless please change things and play around an your own - it is the only way to really learn it!
The first part will introduce you to the core Python language. This tutorial uses Python 3 but almost all things can be transferred to Python 2. If possible choose Python 3 for your own work!
Python is dynamically typed and assigning something to a variable will give it that type.
# Three basic types of numbers a = 1 # Integers b = 2.0 # Floating Point Numbers c = 3.0 + 4j # Complex Numbers, note the use of j for the complex part # Arithmetics work as expected. # Upcasting from int -> float -> complex d = a + b # (int + float = float) print(d) e = c ** 2 # c to the second power, performs a complex multiplication print(e)
Just enclose something in single or double quotes and it will become a string. On Python 3 it defaults to unicode strings, e.g. non Latin alphabets and other symbols.
# You can use single or double quotes to create strings. location = "New York" # Concatenate strings with plus. where_am_i = 'I am in ' + location # Print things with the print() function. print(location, 1, 2) print(where_am_i) # Strings have a lot of attached methods for common manipulations. print(location.lower()) # Access single items with square bracket. Negative indices are from the back. print(location[0], location[-1]) # Strings can also be sliced. print(location[4:])
Save your name in all lower-case letters to a variable, and print a capitalized version of it. Protip: Google for "How to capitalize a string in python". This works for almost any programming problem - someone will have had the same issue before!
Python has two main collection types: List and dictionaries. The former is just an ordered collection of objects and is introduced here.
# List use square brackets and are simple ordered collections of things. everything = [a, b, c, 1, 2, 3, "hello"] # Access elements with the same slicing/indexing notation as strings. # Note that Python indices are zero based! print(everything[0]) print(everything[:3]) print(everything[2:-2]) # Negative indices are counted from the back of the list. print(everything[-3:]) # Append things with the append method. everything.append("you") print(everything)
# Dictionaries have named fields and no inherent order. As is # the case with lists, they can contain anything. information = { "name": "Hans", "surname": "Mustermann", "age": 78, "kids": [1, 2, 3] } # Acccess items by using the key in square brackets. print(information["kids"]) # Add new things by just assigning to a key. print(information) information["music"] = "jazz" print(information) # Delete things by using the del operator del information["age"] print(information)
# Functions are defined using the def keyword. def do_stuff(a, b): return a * b # And called with the arguments in round brackets. print(do_stuff(2, 3)) # Python function also can have optional arguments. def do_more_stuff(a, b, power=1): return (a * b) ** power print(do_more_stuff(2, 3)) print(do_more_stuff(2, 3, power=3)) # For more complex function it is oftentimes a good idea to #explicitly name the arguments. This is easier to read and less error-prone. print(do_more_stuff(a=2, b=3, power=3))
# Import anything, and use it with the dot accessor. import math a = math.cos(4 * math.pi) # You can also selectively import things. from math import pi b = 3 * pi # And even rename them if you don't like their name. from math import cos as cosine c = cosine(b)
print(dir(math))
Typing the dot and the TAB will kick off tab-completion.
math.
In the IPython framework you can also use a question mark to view the documentation of modules and functions.
math.cos?
Loops and conditionals are needed for any non-trivial task. Please note that whitespace matters in Python. Everything that is indented at the same level is part of the same block. By far the most common loops in Python are for-each loops as shown in the following. While loops also exist but are rarely used.
temp = ["a", "b", "c"] # The typical Python loop is a for-each loop, e.g. for item in temp: # Everything with the same indentation is part of the loop. new_item = item + " " + item print(new_item) print("No more part of the loop.")
# Useful to know is the range() function. for i in range(5): print(i)
The second crucial control flow structure are if/else conditional and they work the same as in any other language.
# If/else works as expected. age = 77 if age >= 0 and age < 10: print("Younger ten.") elif age >= 10: print("Older than ten.") else: print("wait what?")
# List comprehensions are a nice way to write compact loops. # Make sure you understand this as it is very common in Python. a = list(range(10)) print(a) b = [i for i in a if not i % 2] print(b) # Equivalant loop for b. b = [] for i in a: if not i % 2: b.append(i) print(b)
def do_something(a, b): print(a + b + something_else) do_something(1, 2)
The SciPy Stack forms the basis for essentially all applications of scientific Python. Here we will quickly introduce the three core libraries:
NumPy
SciPy
Matplotlib
The SciPy stack furthermore contains
pandas (library for data analysis on tabular and time series data) and
sympy (package for symbolic math), both very powerful packages, but we will omit them in this tutorial.
import numpy as np # Create a large array with with 1 million samples. x = np.linspace(start=0, stop=100, num=1E6, dtype=np.float64) # Most operations work per-element. y = x ** 2 # Uses C and Fortran under the hood for speed. print(y.sum()) # FFT and inverse x = np.random.random(100) large_X = np.fft.fft(x) x = np.fft.ifft(large_X)
SciPy, in contrast to
NumPy which only offers basic numerical routines, contains a lot of additional functionality needed for scientific work. Examples are solvers for basic differential equations, numeric integration and optimization, spare matrices, interpolation routines, signal processing methods, and a lot of other things.
from scipy.interpolate import interp1d x = np.linspace(0, 10, num=11, endpoint=True) y = np.cos(-x ** 2 / 9.0) # Cubic spline interpolation to new points. f2 = interp1d(x, y, kind='cubic')(np.linspace(0, 10, num=101, endpoint=True))
import matplotlib.pyplot as plt plt.plot(np.sin(np.linspace(0, 2 * np.pi, 2000)), color="green", label="Some Curve") plt.legend() plt.ylim(-1.1, 1.1) plt.show()
! | https://nbviewer.jupyter.org/github/obspy/docs/blob/master/workshops/2017-10-25_iris_stcu/Python%20Introduction/Python_Crash_Course.ipynb | CC-MAIN-2018-39 | refinedweb | 1,498 | 66.74 |
RelativeNodePath QML Type
Specifies a relative node path element. More...
Properties
- browseName : NodeId
- includeSubtypes : bool
- isInverse : bool
- ns : NodeId
- referenceType : QOpcUa::ReferenceTypeId
Detailed Description
import QtOpcUa 5.13 as QtOpcUa QtOpcUa.RelativeNodePath { ns: "Test Namespace" browseName: "SomeName" }
See also Node, NodeId, and RelativeNodeId.
Property Documentation
Browse name of this path element.
Whether subtypes are included when matching this path element. The default value of this property is
true.
Whether the reference to follow is inverse. The default value of this property is
false.
Namespace name of this path element. The identifier can be the index as a number or the name as string. A string which can be converted to an integer is considered a namespace index.
Type of reference when mathing this path element. This can be a QOpcUa::ReferenceTypeId or a NodeId. The default value of this property is
Constants.ReferenceTypeId. | https://doc-snapshots.qt.io/qt5-5.15/qml-qtopcua-relativenodepath.html | CC-MAIN-2022-27 | refinedweb | 143 | 53.68 |
Sugar Pie
Question: in the code snippet bleow, what does the result stream,
rs, approximate?
from itertools import count, ifilter, izip from random import random as xy from math import hypot pt = lambda: (xy(), xy()) on = ifilter(lambda n: hypot(*pt()) < 1., count(1)) rs = (4. * j / i for i, j in izip(on, count(1)))
The code isn’t wilfully obscure but I’ll admit it’s unusual. Although written in a functional style, the source of the stream,
pt, is utterly impure, generating a sequence of random results: it sprinkles points in a unit square. Despite this random input the results stream always tends to the same value. Well, in theory it should!
Here’s a picture of a round pie on a square baking tray being dusted with sugar.
Thanks again to Marius Gedminas for pointing me at
math.hypot, the best way to find the length of a 2D vector. (The
previous version of this note used
abs(complex(*pt()), which it
claimed to be better than
math.sqrt(x * x + y * y)). | http://wordaligned.org/articles/sugar-pie | CC-MAIN-2015-06 | refinedweb | 177 | 74.59 |
Add ability to read bytes from objc.varlist
Use case: CGBitmapContextGetData returns a pointer to raw data, several MBs in size. I would like to pass it to NumPy for further processing. Currently, it gets converted into a tuple of millions of single-byte objects.
A simple solution would be adding a method
def as_bytes(self, count: int) -> bytes.
Another alternative is to implement Buffer Protocol
Adding as_bytes would be useful, and I'll look into this. I guess as_memoryview would be more useful because this could avoid copying data.
Implementing the buffer protocol is not possible, this requires knowing the size of the buffer and that's something a varlist object doesn't do.
I'm adding method
as_buffer(self, count: int)that returns a writable memory view object that refers to the same memory as the varlist object (for the first count elements of the list).
This makes it possible to directly write to the underlying memory through the buffer, just like you can write to that memory through the item setter of the varlist object. The user is responsible for checking if the memory is writable by checking the API documentation.
Add
objc.varlist.as_buffer
This fixes
#205
→ <<cset 6afdd6d0ff48>>
This will be in PyObjC 4.0, which will be released around High Sierra's release (if I read Apple's website correct High Sierra is released sept. 25, with the current state of the 10.13 branch that means PyObjC should be released sometime next week)
Removing version: 3.1 (automated comment) | https://bitbucket.org/ronaldoussoren/pyobjc/issues/205/add-ability-to-read-bytes-from-objcvarlist | CC-MAIN-2018-43 | refinedweb | 256 | 64.71 |
#include <wx/textctrl.h>
This class can be used to (temporarily) redirect all output sent to a C++ ostream object to a wxTextCtrl instead.
std::streambufin which case this class is not compiled in. You also must have
wxUSE_STD_IOSTREAMoption on (i.e. set to 1) in your
setup.hto be able to use it. Under Unix, specify
–enable-std_iostreamsswitch when running configure for this.
Example of usage:
The constructor starts redirecting output sent to ostr or cout for the default parameter value to the text control text.
When a wxStreamToTextRedirector object is destroyed, the redirection is ended and any output sent to the C++ ostream which had been specified at the time of the object construction will go to its original destination. | https://docs.wxwidgets.org/3.0/classwx_stream_to_text_redirector.html | CC-MAIN-2021-17 | refinedweb | 122 | 56.76 |
Writing Java Programs That Use the Wolfram Language
Introduction
The first part of this User Guide describes using J/Link to allow you to call from the Wolfram Language into Java, thereby extending the Wolfram Language environment to include the functionality in all existing and future Java classes. This part shows you how to use J/Link in the opposite direction, as a means to write Java programs that use the Wolfram Language kernel as a computational engine.
J/Link uses the Wolfram Symbolic Transfer Protocol (WSTP), Wolfram Research's protocol for sending data and commands between programs. Many of the concepts and techniques in J/Link programming are the same as those for programming with the WSTP C-language API. The J/Link documentation is not intended to be an encyclopedic compendium of everything you need to know to write Java programs that use WSTP. Programmers may have to rely a little on the general documentation of WSTP programming. Many of the functions J/Link provides have C-language counterparts that are identical or nearly so.
If you have not read "Calling Java from the Wolfram Language", you should at least skim it at some point. Your Java "front end" can use the same techniques for calling Java methods from Wolfram Language code and passing Java objects as arguments that programmers use when running the kernel from the notebook front end. This allows you to have a very high-level interface between Java and the Wolfram Language. When you are writing WSTP programs in C, you have to think about passing and returning simple things like strings and integers. With J/Link you can pass Java objects back and forth between Java and the Wolfram Language. J/Link truly obliterates the boundary between Java and the Wolfram Language.
This half of the User Guide is organized as follows. "What Is WSTP?" is a very brief introduction to WSTP. The section "Preamble" introduces the most important J/Link interfaces and classes. "Sample Program" presents a simple example program. "Creating Links with MathLinkFactory" shows how to launch the Wolfram Language and create links. "The MathLink Interface" and "The KernelLink Interface" give a listing of methods in the large and all-important MathLink and KernelLink interfaces. The methods are grouped by function, and there is some commentary mixed in. This treatment does not replace the actual JavaDoc help files for J/Link, found in the JLink/Documentation/JavaDoc directory. The JavaDoc files are the main method-by-method reference for J/Link, and they include all the classes and interfaces that programmers will use. The remaining sections of this User Guide present discussions of a number of important topics in J/Link programming, including how to handle exceptions and get graphics and typeset output.
When you are reading this text or programming in Java or the Wolfram Language, remember that the entire source code for J/Link is provided. If you want to see how anything works (or why it does not), you can always consult the source code directly.
What Is WSTP?
The Wolfram Symbolic Transfer Protocol (WSTP) is a platform-independent protocol for communicating between programs. In more concrete terms, it is a means to send and receive Wolfram Language expressions. WSTP is the means by which the notebook front end and kernel communicate with each other. It is also used by a large number of commercial and freeware applications and utilities that link the Wolfram Language and other programs or languages.
WSTP is implemented as a library of C-language functions. Using it from another language (such as Java) typically requires writing some type of "glue" code that translates between the data types and calling conventions of that language and C. At the core of J/Link is just such a translation layer—a library built using Java's JNI (Java Native Interface) specification.
An old name for WSTP was MathLink, and this explains the appearance of that legacy name in several J/Link classes and interfaces.
Overview of the Main J/Link Interfaces and Classes
Preamble
The J/Link classes are written in an object-oriented style intended to maximize their extensibility in the future without requiring users' code to change. This requires a clean separation between interface and implementation. This is accomplished by exposing the main link functionality through interfaces, not classes. The names of the concrete classes that implement these interfaces will hardly be mentioned because programmers do not need to know or care what they are. Rather, you will use objects that belong to one of the interface types. You do not need to know what the actual classes are because you will never create an instance directly; instead, you use a "factory method" to create an instance of a link class. This will become clear further on.
MathLink and KernelLink
The two most important link interfaces you need to know about are MathLink and KernelLink. The MathLink interface is essentially a port of the WSTP C API into Java. Most of the method names will be familiar to experienced WSTP programmers. KernelLink extends MathLink and adds some important high-level convenience methods that are only meaningful if the other side of the link is a Wolfram Language kernel (for example, the method waitForAnswer(), which assumes the other side of the link will respond with a defined series of packets).
The basic idea is that the MathLink interface encompasses all the operations that can be performed on a link without making any assumptions about what program is on the other side of the link. KernelLink adds the assumption that the other side is a Wolfram Language kernel. In the future, other interfaces could be added that also extend MathLink and encapsulate other conventions for communicating over a link.
KernelLink is the most important interface, as most programmers will work exclusively with KernelLink. Of course, since KernelLink extends MathLink, many of the methods you will use on your KernelLink objects are declared and documented in the MathLink interface.
The most important class that implements MathLink is NativeLink, so named because it uses native methods to call into Wolfram Research's WSTP library. In the future, other classes could be added that do not rely on native methods—for example, one that uses RMI to communicate across a network. As discussed above, most programmers do not need to be concerned about what these classes are, because they will never type a link class name in their code.
MathLinkFactory
MathLinkFactory is the class that you use to create link objects. It contains the static methods createMathLink(), createKernelLink(), and createLoopbackLink(), which take various argument sequences. These are the equivalents of calling WSOpen in a C program. The MathLinkFactory methods are discussed in detail in "Creating Links with MathLinkFactory".
MathLinkException
MathLinkException is the exception class that is thrown by many of the methods in MathLink and KernelLink. The J/Link API uses exceptions to indicate errors, rather than function return values like the WSTP C API. In C, you write code that checks the return values as follows.
In J/Link, you wrap calls in a try block and catch MathLinkException.
Expr
The Expr class provides a direct representation of Wolfram Language expressions in Java. Expr has a number of methods that provide information about the structure of the expression and that let you extract components. These methods have names and behaviors that will be familiar to Wolfram Language programmers—for example, length(), part(), numberQ(), vectorQ(), take(), delete(), and so on. When reading from a link, instead of using the low-level MathLink interface methods for discovering the structure and properties of the incoming expression, you can just read an entire expression from the link using getExpr(), and then use Expr methods to inspect it or decompose it. For writing to a link, Expr objects can be used as arguments to some of the most important KernelLink methods. The Expr class is discussed in detail in "Motivation for the Expr Class".
PacketListener() or discardAnswer(), which hide the packet loop within them. Not only is this a convenience to avoid having to put the same boilerplate code into every program, it is necessary since in some circumstances programmers cannot write a correct packet because special packets may arrive that J/Link needs to handle internally. It is therefore necessary to hide the details of the packet loop from programmers. In some cases, though, programmers will want to observe and/or operate on the incoming flow of packets. A typical example would be to display Print output or messages generated by a computation. These outputs are side effects of a computation and not the "answer", and they are normally discarded by waitForAnswer().
To accommodate this need, KernelLink objects fire a PacketArrivedEvent for each packet that is encountered while running an internal packet loop. You can register your interest in receiving notifications of these packets by creating a class that implements the PacketListener interface and registering an object of this class with the KernelLink object. The PacketListener interface has only one method, packetArrived(), which will be called for each packet. Your packetArrived() method can consume or ignore the packet without affecting the internal packet loop in any way. Very advanced programmers can optionally indicate that the internal packet loop should not see the packet.
The PacketListener interface is discussed in greater detail in "Using the PacketListener Interface".
High-Level User Interface Classes
J/Link includes several classes that are useful for creating programs that have user interfaces. The MathCanvas and MathGraphicsJPanel classes provide an easy way to display Wolfram Language graphics and typeset expressions. These classes are often used from Wolfram Language code, as described in "The MathCanvas and MathGraphicsJPanel Classes", but they are just as useful in Java programs. They are discussed in "MathCanvas and MathGraphicsJPanel". The various "MathListener" classes ("Handling Events with Wolfram Language Code: The 'MathListener' Classes") can be used from Java code to trigger evaluations in the Wolfram Language when user interface actions occur.
New in J/Link 2.0 are the classes in the com.wolfram.jlink.ui package. These classes provide some very high-level user interface elements. There is the ConsoleWindow class, which gives you a console output window (this is the class used to implement the Wolfram Language function ShowJavaConsole, discussed in "The Java Console Window"). The InterruptDialog class gives you an Interrupt Evaluation dialog that lets you interrupt or abort computations. The MathSessionPane class provides an In/Out Wolfram System session window complete with a full set of editing functions including cut/copy/paste/undo/redo, support for graphics, syntax coloring, and customizable font styles. The auxiliary classes SyntaxTokenizer and BracketMatcher are used by MathSessionPane, but can also be used separately to provide these services in your own programs. All these classes are discussed in the section "Some Special User Interface Classes: Introduction".
Sample Program
Here is a basic Java program that launches the Wolfram Language kernel, uses it for some computations, and then shuts it down. This program is provided in source code and compiled form in the JLink/Examples/Part2 directory. The usual WSTP arguments including the path to the kernel are given on the command line you use to launch the program, and some typical examples are given below. You will have to adjust the Wolfram Language kernel path for your system. If you have your CLASSPATH environment variable set to include JLink.jar, then you can leave off the -classpath specification in these command lines. It is assumed that these commands are executed from the JLink/Examples/Part2 directory.
(Windows)
java -classpath .;..\..\JLink.jar SampleProgram -linkmode launch -linkname "c:\program files\wolfram research\mathematica\10.0\mathkernel.exe"
(Linux)
java -classpath .:../../JLink.jar SampleProgram -linkmode launch -linkname 'math -mathlink'
(Mac OS X from a terminal window)
java -classpath .:../../JLink.jar SampleProgram -linkmode launch -linkname '"/Applications/Mathematica.app/Contents/MacOS/MathKernel" -mathlink'
Here is the code from SampleProgram.java. This program demonstrates launching the kernel with MathLinkFactory.createKernelLink(), and several different ways to send computations to the Wolfram Language and read the result.
import com.wolfram.jlink.*;
public class SampleProgram {
public static void main(String[] argv) {
KernelLink ml = null;
try {
ml = MathLinkFactory.createKernelLink(argv);
} catch (MathLinkException e) {
System.out.println("Fatal error opening link: " + e.getMessage());
return;
}
try {
// Get rid of the initial InputNamePacket the kernel will send
// when it is launched.
ml.discardAnswer();
ml.evaluate("<<MyPackage.m");
ml.discardAnswer();
ml.evaluate("2+2");
ml.waitForAnswer();
int result = ml.getInteger();
System.out.println("2 + 2 = " + result);
// Here's how to send the same input, but not as a string:
ml.putFunction("EvaluatePacket", 1);
ml.putFunction("Plus", 2);
ml.put(3);
ml.put(3);
ml.endPacket();
ml.waitForAnswer();
result = ml.getInteger();
System.out.println("3 + 3 = " + result);
// If you want the result back as a string, use evaluateToInputForm
// or evaluateToOutputForm. The second arg for either is the
// requested page width for formatting the string. Pass 0 for
// PageWidth->Infinity. These methods get the result in one
// step--no need to call waitForAnswer.
String strResult = ml.evaluateToOutputForm("4+4", 0);
System.out.println("4 + 4 = " + strResult);
} catch (MathLinkException e) {
System.out.println("MathLinkException occurred: " + e.getMessage());
} finally {
ml.close();
}
}
}
Creating Links with MathLinkFactory
To isolate clients of the J/Link classes from implementation details it is required that clients never explicitly name a link class in their code. This means that programs will never call new to create an instance of a link class. Instead, a so-called "factory method" is supplied that creates an appropriate instance for you, based on the arguments you pass in. This factory method takes the place of calling WSOpen in a C program.
The method that creates a KernelLink is a static method called createKernelLink() in the MathLinkFactory class.
public static KernelLink createKernelLink(String cmdLine) throws MathLinkException
public static KernelLink createKernelLink(String[] argv) throws MathLinkException
. . . plus a few more of limited usefulness
There are also two functions called createMathLink() that take the same arguments but create a MathLink instead of a KernelLink. Very few programmers will need to use createMathLink() because the only reason to do so is if you are connecting to a program other than the Wolfram Language kernel. See the JavaDoc files for a complete listing of the methods.
The second signature of createKernelLink() is convenient if you are using the command-line parameters that your program was launched with, which are, of course, provided to your main() function as an array of strings. An example of this use can be found in the sample program in the section "Sample Program". Other times it will be convenient to specify the parameters as a single string.
KernelLink ml = MathLinkFactory.createKernelLink("-linkmode launch -linkname 'c:\\program files\\wolfram research\\mathematica\\10.0\\mathkernel'");
Note that the linkname argument is wrapped in single quotation marks ('). This is because WSTP parses this string as a complete command line, and wrapping it in single quotation marks is an easy way to force it to be seen as just a file name. Also note that it is required to type two backslashes to indicate a Windows directory separator character when you are typing a literal string in your Java code because Java, like C and the Wolfram Language, treats the \ as a meta-character that quotes the character following.
Here are some typical arguments for createKernelLink() on various platforms when given as a single string. Note the use of quote characters (' and ").
// Typical launch on Windows
KernelLink ml = MathLinkFactory.createKernelLink("-linkmode launch -linkname 'c:\\program files\\wolfram research\\mathematica\\10.0\\mathkernel.exe'");
// Typical launch on Linux
KernelLink ml = MathLinkFactory.createKernelLink("-linkmode launch -linkname 'math -mathlink'");
// Typical launch on Mac OS X
KernelLink ml = MathLinkFactory.createKernelLink("-linkmode launch -linkname '\"/Applications/Mathematica.app/Contents/MacOS/MathKernel\" -mathlink'");
// Typical "listen" link on any platform:
KernelLink ml = MathLinkFactory.createKernelLink("-linkmode listen -linkname foo");
Here are typical arguments for createKernelLink() when given as an array of strings.
// Typical launch on Windows:
String[] argv = {"-linkmode", "launch", "-linkname", "c:\\program files\\wolfram research\\mathematica\\10.0\\mathkernel"};
// Typical launch on Linux:
String[] argv = {"-linkmode", "launch", "-linkname", "math -mathlink"};
// Typical launch on Mac OS X:
String[] argv = {"-linkmode", "launch", "-linkname", "\"/Applications/Mathematica.app/Contents/MacOS/MathKernel\" -mathlink"};
// Typical "listen" link on any platform:
String[] argv = {"-linkmode", "listen", "-linkname", "foo"};
The arguments for createKernelLink() and createMathLink() (e.g. -linkmode, -linkprotocol, and so on) are identical to those used for WSOpen in the WSTP C API. Consult the WSTP documentation for more information.
The createKernelLink() and createMathLink() methods will always return a link object that is not null or throw a MathLinkException. You do not need to test whether the returned link is null. Because these methods throw a MathLinkException on failure, you need to wrap the call in a try block.
KernelLink ml = null;
try {
ml = MathLinkFactory.createKernelLink("-linkmode launch -linkname 'c:\\program files\\wolfram research\\mathematica\\10.0\\mathkernel'");
} catch (MathLinkException e) {
// This is equivalent to WSOpen returning NULL in a C program.
System.out.println(e.getMessage());
System.exit(1);
}
The fact that createKernelLink() succeeds does not mean that the link is connected and functioning properly. There are a lot of things that could be wrong. For example, if you launch a program that knows nothing about WSTP, createKernelLink() will still succeed. There is a difference between creating a link (which involves setting up your side) and connecting one (which verifies that the other side is alive and well).
If a link has not been connected yet, WSTP will automatically try to connect it the first time you try to read or write something. Alternatively, you can call the connect() method to explicitly connect the link after creating it. If the link cannot be connected, then the attempt to connect, whether made explicitly by you or internally by WSTP, will fail or even hang indefinitely. It can hang because the attempt to connect will block until the connection succeeds or until it detects a fatal problem with the link. In some cases, neither will happen—for example, if you mistakenly launch a program that is not WSTP-aware. Dealing with blocking in J/Link methods is discussed more thoroughly later, but in the case of connecting the link you have an easy solution. The connect() method has a second signature that takes a long argument specifying the number of milliseconds to wait before abandoning the attempt to connect: connect(long timeoutMillis). You do not need to explicitly call connect() on a link—it will be connected for you the first time you try to read something. You can use a call to connect() to catch failures at a well-defined place, or if you want to use the automatic time-out feature. Here is a code fragment that demonstrates how to implement a time out in connect().
KernelLink ml = null;
try {
ml = MathLinkFactory.createKernelLink("-linkmode launch -linkname 'c:\\program files\\wolfram research\\mathematica\\10.0\\mathkernel'");
} catch (MathLinkException e) {
System.out.println("Link could not be created: " + e.getMessage());
return; // Or whatever is appropriate.
}
try {
connect(10000); // Wait at most 10 seconds
} catch (MathLinkException e) {
// If the timeout expires, a MathLinkException will be thrown.
System.out.println("Failure to connect link: " + e.getMessage());
ml.close();
return; // Or whatever is appropriate.
}
When you are finished with a link, call its close() method. Although the finalizer for a link object will close the link, you cannot guarantee that the finalizer will be called in a timely fashion, or even at all, so you should always manually close a link when you are done.
Using Listen and Connect Modes
You can use the listen and connect linkmodes, instead of launch, if you want to connect to an already-running program. Using listen and connect linkmodes in J/Link works in the same way as with C WSTP programs. See the MathLink Tutorial () or "WSTP and External Program Communication" for more information.
Using a Remote Kernel
To attach a remote Wolfram Language kernel to a J/Link program, open the link using the listen/connect style. On the remote machine, launch the Wolfram Language and have it listen on a link by executing the following on a command line.
Then in your Java program, use the following. the Wolfram Language. You can have the Java program automatically launch the Wolfram Language on the remote machine by using an rsh or ssh client program. Linux and OSX machines have rsh and ssh built in, and the Wolfram Language ships with the winrsh client program for Windows. Here is an example of using winrsh to launch and connect to the Wolfram Language on a remote Linux machine.
KernelLink ml = MathLinkFactory.createKernelLink("-linkmode listen -linkprotocol tcpip -linkname 1234");
Runtime.exec("c:\\program files\\wolfram research\\mathematica\\10.0\\systemfiles\\frontend\\binaries\\windows\\winrsh -m -q -h -l YourUsername -'math -mathlink -linkmode connect -linkprotocol tcpip -linkname 1234@localmachinename'");
The MathLink Interface
MathLink is the low-level interface that is the root of all link objects in J/Link. The methods in MathLink correspond roughly to a subset of those in the C-language WSTP API. Most programmers will deal instead with objects of type KernelLink, a higher-level interface that extends MathLink and incorporates the assumption that the program on the other side of the link is a Wolfram Language kernel.
There will not be much said here about most of these methods, as they behave like their C API counterparts in most respects. The JavaDoc help files are the main method-by-method documentation for all the J/Link classes and interfaces. They can be found in the JLink/Documentation/JavaDoc directory. This section is provided mainly for those who want to skim a traditional listing.
These are all public methods (the public has been left off to keep lines short).
Managing Links
void close();
void connect() throws MathLinkException;
// Wait at most timeoutMillis for the connect to occur, then throw a MathLinkException
void connect(long timeoutMillis) throws MathLinkException;
//A synonym for connect. This is the newer name.
void activate() throws MathLinkException;
Packet Functions
//Does not throw exception because it will often be needed in a catch block.
void newPacket();
int nextPacket() throws MathLinkException;
void endPacket() throws MathLinkException;
Error Handling
Link State
Putting
Putting expressions on the link is a bit different in Java than C because Java lets you overload functions. Thus, there is no need to have methods with names like the C functions WSPutInteger and WSPutDouble; it suffices to have a single function named put() that has different definitions for each argument type. The only exceptions to this are the few cases where the argument needs to be interpreted in a special way. For example, there are three "put" methods that take a single string argument: put() (equivalent to the C-language function WSPutUCS2String), putSymbol() (equivalent to WSPutUCS2Symbol), and putByteString() (equivalent to WSPutByteString).
For numeric types, there are the following methods (there is no need to provide a put() method for byte, char, and short types, as these can be automatically promoted to int):
void put(int i) throws MathLinkException;
void put(long i) throws MathLinkException;
void put(double d) throws MathLinkException;
For strings and symbols, use the following.
void put(String s) throws MathLinkException;
void putByteString(byte[] b) throws MathLinkException;
void putSymbol(String s) throws MathLinkException;
All the J/Link methods that put or get strings use Unicode, which is the native format for Java strings.
For Booleans, a Java true is sent as the Wolfram Language symbol True, and False for Java false.
There is also a put() method for arbitrary Java objects. In the default implementation, this does not do anything very useful for most objects (what it does is send obj.toString()). A handful of objects, however, have a meaningful representation to the Wolfram Language. These are arrays, strings, Expr objects (discussed elsewhere), and instances of the so-called "wrapper" classes (Integer, Double, Character, and so on), which hold single numeric values. Arrays are sent as lists, strings are sent as Wolfram Language strings, and the wrapper classes are sent as their numeric value. (The last case is for complex numbers, which will be discussed later.)
There is a special method for arrays that lets you specify the heads of the array in each dimension. The heads are passed as an array of strings. Note that unlike the C counterparts (WSPutInteger32Array, WSPutReal64Array, and so on), you do not have to specify the depth or dimensions because they can be inferred from the array itself.
Use the following for putting Wolfram Language functions.
Use the following for transferring expressions from one link to another (the "this" link is the destination)
void transferExpression(MathLink source) throws MathLinkException;
void transferToEndOfLoopbackLink(LoopbackLink source) throws MathLinkException;
Use this for a low-level "textual interface".
void putNext(int type) throws MathLinkException;
void putArgCount(int argCount) throws MathLinkException;
void putSize(int size) throws MathLinkException;
int bytesToPut() throws MathLinkException;
void putData(byte[] data) throws MathLinkException;
void putData(byte[] data, int len) throws MathLinkException;
Getting
Because you cannot overload methods on the basis of return type, there is no catchall get() method for reading from the link, as is the case with the put() method. Instead, there are separate methods for each data type. Notice that unlike their counterparts in the C API, these methods return the actual data that was read, not an error code (exceptions are used for errors, as with all the methods).
int getInteger() throws MathLinkException;
long getLongInteger() throws MathLinkException;
double getDouble() throws MathLinkException;
String getString() throws MathLinkException;
byte[] getByteString(int missing) throws MathLinkException;
String getSymbol() throws MathLinkException;
boolean getBoolean() throws MathLinkException;
Arrays of the nine basic types (boolean, byte, char, short, int, long, float, double, String), as well as complex numbers, can be read with a set of methods of the form getXXXArrayN(), where XXX is the data type and N specifies the depth of the array. For each type there are two methods like the following examples for int. There is no way to get the heads of the array using these functions (it will typically be "List" at every level). If you need to get the heads as well, you should use getExpr() to read the expression as an Expr and then examine it using the Expr methods.
int[] getIntArray1() throws MathLinkException;
int[][] getIntArray2() throws MathLinkException;
... and others for all the eight primitive types and String and the complex class
Note that you do not have to know exactly how deep the array is to use these functions. If you call, say, getFloatArray1(), and what is actually on the link is a matrix of reals, then the data will be flattened into the requested depth (a one-dimensional array in this case). Unfortunately, if you do this you cannot determine what the original depth of the data was. If you call a function that expects an array of depth greater than the actual depth of the array on the link, it will throw a MathLinkException.
If you need to read an array of depth greater than 2 (but a maximum of 5) , you can use the getArray() method. The getXXXArrayN() methods already discussed are just convenience methods that use getArray() internally. The type argument must be one of TYPE_BOOLEAN, TYPE_BYTE, TYPE_CHAR, TYPE_SHORT, TYPE_INT, TYPE_LONG, TYPE_FLOAT, TYPE_DOUBLE, TYPE_STRING, TYPE_EXPR, TYPE_BIGINTEGER, TYPE_BIGDECIMAL, or TYPE_COMPLEX.
Object getArray(int type, int depth) throws MathLinkException;
Object getArray(int type, int depth, String[] heads) throws MathLinkException;
Unlike the C WSTP API, there are no methods for "releasing" strings or arrays because this is not necessary. When you read a string or array off the link, your program gets its own copy of the data, so you can write into it if you desire (although Java strings are immutable).
The getFunction() method needs to return two things: the head and the argument count. Thus, there is a special class called MLFunction that encapsulates both these pieces of information, and this is what getFunction() returns. The MLFunction class is documented later.
MLFunction getFunction() throws MathLinkException;
// Returns the function's argument count. Throws MathLinkException if the function
// is not the specified one.
int checkFunction(String f) throws MathLinkException;
//Throws an exception if the incoming function does not have this head and arg count.
void checkFunctionWithArgCount(String f, int argCount) throws MathLinkException;
These methods support the low-level interface for reading from a link.
int getNext() throws MathLinkException;
int getType() throws MathLinkException;
int getArgCount() throws MathLinkException;
int bytesToGet() throws MathLinkException;
byte[] getData(int len) throws MathLinkException;
Use the following for reading Expr objects.
public Expr getExpr() throws MathLinkException;
// Gets an expression off the link, then resets the link to the state
// prior to reading the expr. You can "peek" ahead without consuming anything
// off the link.
public Expr peekExpr() throws MathLinkException;
Messages
The messages referred to by the following functions are not Wolfram System warning messages, but a low-level type of WSTP communication used mainly to send interrupt and abort requests. The getMessage() and messageReady() methods no longer function in J/Link 2.0 and higher. You must use setMessageHandler() if you want to receive messages from the Wolfram Language.
int getMessage() throws MathLinkException;
void putMessage(int msg) throws MathLinkException;
boolean messageReady() throws MathLinkException;
Marks
long createMark() throws MathLinkException;
//Next two don't throw, since they are often used in cleanup operations in catch handlers.
void seekMark(long mark);
void destroyMark(long mark);
Complex Class
The setComplexClass() method lets you assign the class that will be mapped to complex numbers in the Wolfram Language. "Mapped" means that the put(Object) method will send a Wolfram Language complex number when you call it with an object of your complex class, and getComplex() will return an instance of this class. For further discussion about this subject and the restrictions on the classes that can be used as the complex class, see "Complex Numbers".
public boolean setComplexClass(Class cls);
public Class getComplexClass();
public Object getComplex() throws MathLinkException;
Yield and Message Handlers
The setYieldFunction() and addMessageHandler() methods take a class, an object, and a method name as a string. The class is the class that contains the named method, and the object is the object of that class on which to call the method. Pass null for the object if it is a static method. The signature of the method you use in setYieldFunction() must be V(Z); for addMessageHandler() it must be II(V). See "Threads, Blocking, and Yielding" for more information and examples.
public boolean setYieldFunction(Class cls, Object obj, String methName);
public boolean addMessageHandler(Class cls, Object obj, String methName);
public boolean removeMessageHandler(String methName);
Constants
The MathLink class also includes the full set of user-level constants from WSTP.h. They have exactly the same names in Java as in C. In addition, there are some J/Link-specific constants.
static int ILLEGALPKT;
static int CALLPKT;
static int EVALUATEPKT;
static int RETURNPKT;
static int INPUTNAMEPKT;
static int ENTERTEXTPKT;
static int ENTEREXPRPKT;
static int OUTPUTNAMEPKT;
static int RETURNTEXTPKT;
static int RETURNEXPRPKT;
static int DISPLAYPKT;
static int DISPLAYENDPKT;
static int MESSAGEPKT;
static int TEXTPKT;
static int INPUTPKT;
static int INPUTSTRPKT;
static int MENUPKT;
static int SYNTAXPKT;
static int SUSPENDPKT;
static int RESUMEPKT;
static int BEGINDLGPKT;
static int ENDDLGPKT;
static int FIRSTUSERPKT;
static int LASTUSERPKT;
//These next two are unique to J/Link.
static int FEPKT;
static int EXPRESSIONPKT;
static int MLTERMINATEMESSAGE;
static int MLINTERRUPTMESSAGE;
static int MLABORTMESSAGE;
static int MLTKFUNC;
static int MLTKSTR;
static int MLTKSYM;
static int MLTKREAL;
static int MLTKINT;
static int MLTKERR;
//Constants for use in getArray()
static int TYPE_BOOLEAN;
static int TYPE_BYTE;
static int TYPE_CHAR;
static int TYPE_SHORT;
static int TYPE_INT;
static int TYPE_LONG;
static int TYPE_FLOAT;
static int TYPE_DOUBLE;
static int TYPE_STRING;
static int TYPE_BIGINTEGER;
static int TYPE_BIGDECIMAL;
static int TYPE_EXPR;
static int TYPE_COMPLEX;
The KernelLink Interface
KernelLink is the interface that you will probably use for the links in your programs. These are all public methods, as is always the case with a Java interface. This section provides only a brief summary of the KernelLink methods; it is intended mainly for those who want to skim a traditional listing. The JavaDoc help files are the main method-by-method documentation for all the J/Link classes and interfaces. They can be found in the JLink/Documentation/JavaDoc directory.
Evaluate
The evaluate() method encapsulates the steps needed to put an expression to the Wolfram Language as a string or Expr and get the answer back as an expression. Internally, it uses an EvaluatePacket for sending the expression. The answer comes back in a ReturnPacket, although the waitForAnswer() method opens up the ReturnPacket—all you have to do is read out its contents. You should always use waitForAnswer() or discardAnswer() instead of spinning your own packet loop waiting for a ReturnPacket. See "The WSTP 'Packet Loop'".
Waiting for the Result
Call waitForAnswer() right after evaluate() (or if you manually send calculations wrapped in EvaluatePacket). It will read packets off the link until it encounters a ReturnPacket, which will hold the result. See "The WSTP 'Packet Loop'".
The discardAnswer() method just throws away all the results from the calculation, so the link will be ready for the next calculation. As you may have guessed, it is nothing more than waitForAnswer() followed by newPacket().
The "evaluateTo" Methods
The next methods are extensions of evaluate() that perform the put and the reading of the result. You do not call waitForAnswer() and then read the result yourself. They also do not throw MathLinkException—if there is an error, they clean up for you and return null. The "evaluateTo" in their names indicates that the methods perform the entire process themselves. evaluateToInputForm() returns a string formatted in InputForm at the specified page width. Specify 0 for the page width to get Infinity. evaluateToOutputForm() is exactly like evaluateToInputForm() except that it returns a string formatted in OutputForm. OutputForm results are more attractive for display to the user, but InputForm is required if you want to pass the string back to the Wolfram Language to be used in further computations. The evaluateToImage() method will return a byte[] of GIF data if you give it an expression that returns a graphic, for example, a Plot command. Pass 0 for the dpi, int, and width arguments if you want their Automatic settings in the Wolfram Language. evaluateToTypeset() returns a byte[] of GIF data of the result of the computation, typeset in either StandardForm or TraditionalForm. These methods are discussed in detail in "EvaluateToImage() and EvaluateToTypeset()".FrontEnd);
byte[] evaluateToImage(Expr e, int width, int height, int dpi, boolean useFrontEnd);
byte[] evaluateToTypeset(String s, int pageWidth, boolean useStdForm);
byte[] evaluateToTypeset(Expr e, int pageWidth, boolean useStdForm);
// Returns the exception that caused the most recent "evaluateTo" method to return null
Throwable getLastError();
Sending Java Object References
If you want to send Java objects "by reference" to the Wolfram Language so that Wolfram Language code can call back into your Java runtime via the "installable Java" facility described in "Calling Java from the Wolfram Language", you must first call the enableObjectReferences() method. This is described in "Sending Object References to the Wolfram Language".
Like the MathLink interface, KernelLink has a put() method that sends objects. The MathLink version of this method only sends objects "by value". The KernelLink version behaves just like the MathLink version for those objects that can be sent by value. In addition, though, it sends all other objects by reference. You must have called enableObjectReferences() before calling put() on an object that will be sent by reference. See "Sending Object References to the Wolfram Language".
The next methods are for putting and getting objects by reference (you must have called enableObjectReferences() to use these methods).
public void putReference(Object obj) throws MathLinkException;
Object getObject() throws MathLinkException;
// These two methods from the MathLink interface are enhanced to return MLTKOBJECT if a Java
// object reference is waiting to be read.
int getNext() throws MathLinkException;
int getType() throws MathLinkException;
Interrupting, Aborting, and Abandoning Evaluations
These methods are for aborting and interrupting evaluations. They are discussed in "Aborting and Interrupting Computations".
Support for PacketListeners
These methods support the registering and notification of PacketListener objects. They are discussed in "Using the PacketListener Interface".
void addPacketListener(PacketListener listener);
void removePacketListener(PacketListener listener);
boolean notifyPacketListeners(int pkt);
The handlePacket() Method (Advanced Users Only)
The handlePacket() method is for very advanced users who are writing their own packet loop instead of calling waitForAnswer(), discardAnswer(), or any of the "evaluateTo" methods. See the JavaDocs for more information.
Methods Valid Only for "StdLinks"
Finally, there are some methods that are meaningful only in methods that are themselves called from the Wolfram Language via the "installable Java" functionality described in "Calling Java from the Wolfram Language". These methods are documented in "Writing Your Own Installable Java Classes". You will not use them if you are writing a program that uses the Wolfram Language as a computational engine.
public void print(String s);
public void message(String symtag, String[] args);
public void message(String symtag, String arg);
public void beginManual();
public boolean wasInterrupted();
public void clearInterrupt();
Sending Computations and Reading Results
WSTP Packets
Communication with the Wolfram Language kernel generally takes place in the form of "packets". A WSTP packet is just a Wolfram Language function, albeit one from a set that is recognized and treated specially by WSTP. When you send something to the Wolfram Language to be evaluated, you wrap it in a packet that tells the Wolfram Language that this is a request for something to be computed, and also tells something about how it is to be computed. All output you receive from the Wolfram Language, including the result and any other side effect output like messages, Print output, and graphics, will also arrive wrapped in a packet. The type of packet tells you about the contents.
A WSTP program typically sends a computation to the Wolfram Language wrapped in a special packet, and then reads a succession of packets arriving from the kernel until the one containing the result of the computation arrives. Along the way, packets that do not contain the result can be either discarded without bothering to examine them or they can be "opened" and operated on. Such nonresult packets include TextPacket expressions containing Print output, MessagePacket expressions containing Wolfram System warning messages, DisplayPacket expressions containing PostScript, and several other types.
You can look at existing WSTP documentation for information on the various packet types for sending things to the Wolfram Language and for what the Wolfram Language sends back. In particular, you should look at the MathLink Tutorial (). For most uses, J/Link hides all the details of packet types and how to send and receive them. You only need to read about packet types if you want to do something beyond what the built-in behavior of J/Link provides. This can be useful for many programs.
The WSTP "Packet Loop"
In a C WSTP program, a typical code fragment for sending a computation to the Wolfram Language and throwing away the result might look like the following.
// C code
WSPutFunction(ml, "EvaluatePacket", 1);
WSPutFunction(ml, "ToExpression", 1);
WSPutString(ml, "Needs[\"MyPackage`\"]");
WSEndPacket(ml);
while (WSNextPacket(ml) != RETURNPKT)
WSNewPacket(ml);
WSNewPacket();
After sending the computation (wrapped in an EvaluatePacket), the code enters a while loop that reads and discards packets until it encounters the ReturnPacket, which will contain the result (which will be the symbol Null here). Then it calls WSNewPacket once again to discard the ReturnPacket.
A WSTP program will typically do this same basic operation many times, so J/Link hides it within some higher-level methods in the KernelLink interface. Here is the J/Link equivalent.
The discardAnswer() method discards all packets generated by the computation until it encounters the one containing the result, and then discards that one too. There is a related method, waitForAnswer(), that discards everything up until the result is encountered. When waitForAnswer() returns, the ReturnPacket has been opened and you are ready to read out its contents. You can probably guess that discardAnswer() is just waitForAnswer() followed by newPacket().
Not only is it a convenience to hide the packet loop within waitForAnswer() and discardAnswer(), it is necessary in some circumstances, since special packets may arrive that J/Link needs to handle internally. Although J/Link has nextPacket() and newPacket() methods, programmers should not write nextPacket()/newPacket() loops like the one in the last C code fragment. Stick to calling waitForAnswer(), discardAnswer(), or using the "evaluateTo" methods discussed in the next section. If you really need to know about all the packets that arrive in your program, use the PacketListener interface, discussed in "Using the PacketListener Interface".
Sending an Evaluation
J/Link provides three main ways to send an expression to the Wolfram Language for evaluation. All three techniques are demonstrated in the sample program in the section "Sample Program".
If you do not care about the result of the evaluation, or if you want the result to arrive in a form other than a string or image, you can use the evaluate() method.
You can send the expression "manually", like in a traditional C WSTP program, by putting the EvaluatePacket head followed by the parts of the expression using low-level methods from the MathLink interface.
If you want the result back as a string or image, you can use the "evaluateTo" methods, which provide a very high-level and convenient interface.
The "evaluateTo" methods are recommended for their convenience, if you want the result back in one of the formats that that they provide. These methods are discussed in "The 'evaluateTo' Methods". If the expression you want evaluated is in the form of a string or Expr (the Expr class is discussed in "Motivation for the Expr Class"), or can be easily converted into one, then you will want to use the evaluate() method. If none of these convenience methods are appropriate, you can put the expression piece by piece similar to a traditional C WSTP program. You do this by sending pieces in a structure that mirrors the FullForm of the expression. Here is a comparison of using these three techniques for sending the computation NIntegrate[x2+y2,{x,-1,1},{y,-1,1}].
String strResult = ml.evaluateToInputForm("NIntegrate[x^2 + y^2, {x,-1,1}, {y,-1,1}]");
ml.evaluate("NIntegrate[x^2 + y^2, {x,-1,1}, {y,-1,1}]");
ml.waitForAnswer();
double doubleResult1 = ml.getDouble();
// It is convenient to use indentation to indicate the structure
ml.putFunction("EvaluatePacket", 1);
ml.putFunction("NIntegrate", 3);
ml.putFunction("Plus", 2);
ml.putFunction("Power", 2);
ml.putSymbol("x");
ml.put(2);
ml.putFunction("Power", 2);
ml.putSymbol("y");
ml.put(2);
ml.putFunction("List", 3);
ml.putSymbol("x");
ml.put(-1);
ml.put(1);
ml.putFunction("List", 3);
ml.putSymbol("y");
ml.put(-1);
ml.put(1);
ml.endPacket();
ml.waitForAnswer();
double doubleResult2 = ml.getDouble();
Reading the Result
Before diving into reading expressions from a link, keep in mind that if you just want the result back as a string or an image, then you are better off using one of the "evaluateTo" methods described in the next section. These methods send a computation and return the answer as a string or image, so you do not have to read it off the link yourself. Also, if you are not interested in the result, you will use discardAnswer() and thus not have to bother reading it.
J/Link provides a number of methods for reading expressions from a link. Many of these methods are essentially identical to functions in the WSTP C API, so to some extent you can learn how to use them by reading standard WSTP documentation. You should also consult the J/Link JavaDoc files for more information. The reading methods generally begin with "get". Examples are getInteger(), getString(), getFunction(), and getDoubleArray2(). There are also two type-testing methods that will tell you the type of the next thing waiting to be read off the link. These methods are getType() and getNext().
As stated earlier, one method you will generally not call is nextPacket(). When waitForAnswer() returns, nextPacket() has already been called internally on the ReturnPacket that holds the answer, so this final packet has already been "opened" and you can start reading its contents right away.
The vast majority of MathLinkException occurrences in J/Link programs are caused by trying to read the incoming expression in a manner that is not appropriate for its type. A typical example is calling a Wolfram Language function that you expect to return an integer, but you call it with incorrect arguments and therefore it returns unevaluated. You call getInteger() to read an integer, but what is waiting on the link is a function like foo[badArgument]. There are several general ways for dealing with problems like this. The first technique is to avoid the exception by using getNext() to determine the type of the expression waiting.
ml.evaluate("SomeFunction[]");
ml.waitForAnswer();
int result;
int type = ml.getNext();
if (type == MathLink.MLTKINT) {
result = ml.getInteger();
} else {
// What you do here is up to you.
System.out.println("Unexpected result: " + ml.getExpr().toString());
// Throw away the packet contents.
ml.newPacket();
}
A related technique is to read the result as an Expr and examine it using the Expr methods. The Expr class is discussed in "Motivation for the Expr Class".
ml.evaluate("SomeFunction[]");
ml.waitForAnswer();
int result;
Expr e = ml.getExpr();
if (e.integerQ()) {
result = e.asInt();
} else {
// What you do here is up to you.
System.out.println("Unexpected result: " + e.toString());
}
A final technique is to just go ahead and read the expression in the form that you expect, but catch and handle any MathLinkException. (Remember that the entire code fragment that follows must be wrapped in a try/catch block for MathLinkException objects, but you are only seeing an inner try/catch block for MathLinkException objects known to be thrown during the read.)
ml.evaluate("SomeFunction[]");
ml.waitForAnswer();
int result;
try {
result = ml.getInteger();
} catch (MathLinkException e) {
ml.clearError();
System.out.println("Unexpected result: " + ml.getExpr().toString());
ml.newPacket(); // Not strictly necessary because of the getExpr() above
}
Another tip for avoiding bugs in code that reads from a link is to use the newPacket() method liberally. A second very common cause of MathLinkException occurrences is forgetting to read the entire contents of a packet before going on to the next computation. The newPacket() method causes the currently opened packet to be discarded. Another way of saying this is that it throws away all unread parts of the expression that is currently being read. It is a clean-up method that ensures that there are no remnants left over from the last packet when you go on to the next evaluation. Consider the following code.
ml.evaluate("SomeFunction[]");
ml.waitForAnswer();
int result;
try {
result = ml.getInteger();
} catch (MathLinkException e) {
ml.clearError();
System.out.println("Unexpected result");
// Oops. Forgot to call newPacket() to throw away the contents.
}
// Boom. The next line causes a MathLinkException if the previous getInteger()
// call failed, because nextPacket() will be called before the previous packet
// was emptied.
ml.evaluate("AnotherFunction[]");
ml.discardAnswer();
This code will cause a MathLinkException to be thrown at the indicated point if the previous call to getInteger() had failed because the programmer forgot to either finish reading the result or call newPacket(). Here is an even simpler example of this error.
ml.evaluate("SomeFunction[]");
ml.waitForAnswer();
// Oops. Forgot to read or throw away the result.
// Probably meant to call discardAnswer() instead of
// waitForAnswer().
ml.evaluate("AnotherFunction[]");
ml.discardAnswer(); // MathLinkException here!
The "evaluateTo" Methods
J/Link provides another set of convenience methods that hide the packet loop within them. These methods perform the entire procedure of sending something to the Wolfram Language and returning the result in one form or another. They have names that begin with "evaluateTo" to indicate that they actually return the result, rather than merely send it, as with the evaluate() method.FE);
byte[] evaluateToImage(Expr e, int width, int height, int dpi, boolean useFE);
byte[] evaluateToTypeset(String s, int width, boolean useStdForm);
byte[] evaluateToTypeset(Expr e, int width, boolean useStdForm);
Only evaluateToInputForm() and evaluateToOutputForm() are discussed in this section, deferring consideration of evaluateToImage() and evaluateToTypeset() until the section "evaluateToImage() and evaluateToTypeset()", "Graphics and Typeset Output". The evaluateToInputForm() and evaluateToOutputForm() methods encapsulate the very common need of sending some code as a string and getting the result back as a formatted string. They differ only in whether the string is formatted in InputForm or OutputForm. OutputForm is good when you want to display the string to the user, and InputForm is good if you need to send the expression back to the Wolfram Language or if you need to save it to a file or splice it into another expression. These methods take a pageWidth argument to specify how many character widths you want the maximum line length to be. Pass in 0 for a page width of infinity.
The evaluateTo methods do not throw a MathLinkException. Instead, they return null to indicate that a problem occurred. This is not very likely unless there is a serious problem, such as if the kernel has unexpectedly quit. In the event that null is returned from one of these methods, you can call getLastError() to get the Throwable object that represents the exception thrown to cause the unexpected result. Generally, it will be a MathLinkException, but there are some other rare cases (like an OutOfMemoryError if an image was returned that would have been too big to handle).
// Give the (caught) exception that prevented a normal return from the last
// call to an "evaluateTo" method.
Throwable getLastError();
All the evaluateTo methods take the input to evaluate in the form of a string or an Expr. Although a full discussion of the Expr class is deferred until "Motivation for the Expr Class", a brief discussion is provided here on how and why you might want to send the input as an Expr. It is often convenient to specify Wolfram Language input as a string, particularly if it is taken directly from a user, such as the contents of a text field. There are times, though, when it is difficult or unwieldy to work with strings. This is particularly true if the expression to evaluate is built up programmatically, or if it is being read off one link to be written onto the link to the kernel. One way to deal with this circumstance is to forgo the convenience of using, say, evaluateToOutputForm() and instead hand-code the entire operation of sending the input so that the answer will come back formatted in OutputForm. You would have to send the EvaluatePacket head and use the ToString function to get the output as a string.
// This duplicates the following:
// String output = ml.evaluateToOutputForm("Integrate[5 x^n a^x, x]", 0);
// As an expression, we send ToString[Integrate[5 x^n a^x, x], PageWidth->Infinity]
ml.putFunction("EvaluatePacket", 1);
ml.putFunction("ToString", 2);
ml.putFunction("Integrate", 2);
ml.putFunction("Times", 3);
ml.put(5);
ml.putFunction("Power", 2);
ml.putSymbol("x");
ml.putSymbol("n");
ml.putFunction("Power", 2);
ml.putSymbol("a");
ml.putSymbol("x");
ml.putSymbol("x");
ml.putFunction("Rule", 2);
ml.putSymbol("PageWidth");
ml.putSymbol("x");
ml.endPacket();
ml.waitForAnswer();
String output = ml.getString();
This version is considerably more verbose, but most of the code comes from the deliberate decision to send the expression piece-by-piece, not as a single string. There are a few extra lines that ensure that the answer comes back as a properly formatted string and that read the result from the link. It is no great loss to have to do it all by hand. But what if you wanted to do the equivalent with evaluateToTypeset()? Most programmers would have no idea how to perform all the work to get the answer in the desired form. If all the evaluateTo methods took only strings, then J/Link programmers would have to either compose all their input as strings or figure out the difficult steps that are already handled for them by the internals of the various evaluateTo methods.
The solution to this is to allow Expr arguments as an alternative to strings. Although Expr has a set of constructors, the easiest way to create a complicated one is to build the expression on a loopback link and read it off the link as an Expr. You can then pass that Expr to the desired evaluateTo method.
LoopbackLink loop = MathLinkFactory.createLoopbackLink();
// Create the expression EvaluatePacket[Integrate[5 x^n a^x, x]] on the loopback link
loop.putFunction("Integrate", 2);
loop.putFunction("Times", 3);
loop.put(5);
loop.putFunction("Power", 2);
loop.putSymbol("x");
loop.putSymbol("n");
loop.putFunction("Power", 2);
loop.putSymbol("a");
loop.putSymbol("x");
loop.putSymbol("x");
loop.endPacket();
// Now read the Expr off the loopback link
Expr e = loop.getExpr();
// We are done with the loopback link now.
loop.close();
String result = ml.evaluateToOutputForm(e, 0);
e.dispose();
In this way, you can build expressions manually with a series of "put" calls and still have the convenience of using the high-level evaluateTo methods.
Using the PacketListener Interface(), discardAnswer(), or one of the "evaluateTo" methods, which hide the packet loop within them. In some cases, though, programmers will want to observe and/or operate on the incoming flow of packets. A typical example would be to display Print output or messages generated by a computation. These outputs are side effects of a computation and not part of the "answer", and they are normally discarded by J/Link's internal packet loop.
To accommodate this need, KernelLink objects fire a PacketArrivedEvent when the internal packet loop reads a packet (that is, right after nextPacket() has been called). You can register your interest in receiving notifications when packets arrive by creating a class that implements the PacketListener interface and registering it with the KernelLink object. This event notification is done according to the standard Java design pattern for events and event listeners. You create a class that implements PacketListener, and then call the KernelLink method addPacketListener() to register this object to receive notifications.
The PacketListener interface contains only one method, packetArrived().
Your PacketListener object will have its packetArrived() method called for every incoming packet. At the time packetArrived() is called, the packet has been opened with nextPacket(). Your code can begin reading the packet contents. The argument to packetArrived() is a PacketArrivedEvent, from which you can extract the link and the packet type (see the example that follows).
The really nice thing about your packetArrived() implementation is that you can consume or ignore the packet without affecting the internal packet loop in any way. You do not need to be concerned about interfering with any other PacketListener or J/Link's own internal handling of packets. You can read all, some, or none of the contents of any packet.
The packetArrived() method returns a Boolean to indicate whether you want to prevent J/Link's internal code from seeing the packet. This very advanced option lets you completely override J/Link's own handling of packets. At this time, the internals of J/Link's packet handling are undocumented, so programmers will have no use for the override ability. Your packetArrived() method should always return true.
Here is a sample packetArrived() implementation that looks only for TextPacket expressions, printing their contents to the System.out stream.
public boolean packetArrived(PacketArrivedEvent evt) throws MathLinkException {
if (evt.getPktType() == MathLink.TEXTPKT) {
KernelLink ml = (KernelLink) evt.getSource();
System.out.println(ml.getString());
}
return true;
}
This design pattern of using an event listener that gets a callback for every packet received allows your program to be very flexible in its handling of packets. You do not have to significantly change your program to implement different policies, such as ignoring nonresult packets, printing them to the System.out stream, writing them to a file, displaying them in a window, and so on. Just slot in different PacketListener objects with different behavior, and leave all the program logic unchanged. You can use as many PacketListener objects as you want.
The PacketPrinter Class for Debugging
J/Link provides one implementation of the PacketListener interface that is designed to simplify debugging J/Link programs. The PacketPrinter class prints out the contents of each packet on a stream you specify. Here is the constructor.
Here is a code fragment showing a typical use.
PacketListener stdoutPrinter = new PacketPrinter(System.out);
ml.addPacketListener(stdoutPrinter);
...
String result = ml.evaluateToOutputForm("Integrate[x^n a^x, x]", 72);
It is especially useful to see Wolfram System messages that were generated during the computation. Using a PacketPrinter to see exactly what the Wolfram Language is sending back is an extremely useful debugging technique. It is no exaggeration to say that the vast majority of problems with J/Link programs can be identified simply by adding one line of code that creates and installs a PacketPrinter. When you are satisfied that your program is behaving as expected, just take out the addPacketListener() line. No other code changes are required.
Using EnterTextPacket
As noted earlier, when you send something to the Wolfram Language to be evaluated, you wrap it in a packet. The Wolfram Language supports three different packets for sending computations, but the two that are most important are EvaluatePacket and EnterTextPacket. EvaluatePacket has been used manually in a few code fragments, and they are used internally by the evaluate() and evaluateTo methods. When the Wolfram Language receives something wrapped in EvaluatePacket, it evaluates it and sends the result back in a ReturnPacket. Side effects like Print output and PostScript for graphics are sent in their own packets prior to the ReturnPacket. In contrast, when the Wolfram Language receives something in an EnterTextPacket, it runs its full "main loop", which includes, among other things, generating In and Out prompts, applying the $Pre, $PrePrint, and $Post functions, and keeping an input and output history. This is how the notebook front end uses the kernel. You might want to look at the more detailed discussion of the properties of these packets in the MathLink Tutorial, available on MathSource.
If you are using the kernel as a computational engine, you probably want to use EvaluatePacket. Use EnterTextPacket instead when you want to present your users with an interactive "session" where previous outputs can be retrieved by number or %. An example is if you are providing functionality similar to the notebook front end, or the kernel's standalone "terminal" interface. The use of EnterTextPacket as the wrapper packet for computations is not as well supported in J/Link, since it will be used much more rarely. You cannot use the evaluateTo methods, since they use EvaluatePacket.
The packet sequence you get in return from an EnterTextPacket computation will not always have a ReturnTextPacket in it. If the computation returns Null, or if there is a syntax error, no ReturnTextPacket will be sent. The final packet that will always be sent is InputNamePacket, containing the input prompt to use for the next computation. This means that the waitForAnswer() method must accommodate two situations: for most computations, the answer will be in a ReturnTextPacket, but for some computations, there will be no answer at all. Therefore waitForAnswer() returns when either a ReturnTextPacket or an InputNamePacket is encountered. This is why waitForAnswer() returns an int—this is the packet type that caused waitForAnswer() to return. If your call to waitForAnswer() returns MathLink.RETURNTEXTPKT, then you can read the answer (it will be a string), and then you call waitForAnswer() again to receive the InputNamePacket that will come afterward. You can read the prompt string with getString() (it will be something like "In[1]:="). If the original waitForAnswer() returns MathLink.INPUTNAMEPKT, then there was no result to display, and you can just call getString() to read the input prompt string. In the first case, where a ReturnTextPacket does come, instead of calling waitForAnswer() a second time to read off the subsequent InputNamePacket, you could simply call nextPacket(), because the InputNamePacket will always immediately follow the ReturnTextPacket. Although it might look a little weird, calling waitForAnswer() has the advantage of triggering notification of all registered PacketListener objects, which would not happen if you manually read a packet with nextPacket(). In other words, it is better to let all packets be read by J/Link's internal loop.
String inputString = getStringFromUser();
ml.putFunction("EnterTextPacket", 1);
ml.put(inputString);
String result = null;
int pkt = ml.waitForAnswer();
if (pkt == MathLink.RETURNTEXTPKT) {
// Wolfram Language computation returned a non-Null result, so a RETURNTEXTPKT
// was generated. Read its contents (a string).
result = ml.getString();
// Now call waitForAnswer() again, which will return after opening the
// InputNamePacket that will always follow. It is essentially
// nothing more than a call to nextPacket() in this circumstance:
ml.waitForAnswer();
}
// At this point, a call to waitForAnswer() has returned MathLink.INPUTNAMEPKT,
// so we just read out the contents, which is the next input prompt.
String nextPrompt = ml.getString();
You will probably want to use a PacketListener when you are using EnterTextPacket, because you probably want to show your users the full stream of output arriving from the Wolfram Language, which might include messages and Print output. Your PacketListener implementation could write the incoming packets to your input/output session window. In fact, if you have such a PacketListener, you might want to let it handle all output, including the ReturnTextPacket containing the result and the InputNamePacket containing the next prompt. Then you would just call discardAnswer() in your main program and let your PacketListener handle everything.
Handling MathLinkExceptions
Most of the MathLink and KernelLink methods throw a MathLinkException if a WSTP error occurs. This is in contrast to the WSTP C API, where functions return an error code. The methods that do not throw a MathLinkException are generally ones that will often need to be used within a catch block handling a MathLinkException that had already been thrown. If these methods threw their own exceptions, then you would need to nest another try/catch block within the catch block.
A well-formed J/Link program will typically not throw a MathLinkException except in the case of fatal WSTP errors, such as the kernel unexpectedly quitting. What is meant by "well-formed" is that you do not make any overt mistakes when putting or getting expressions, such as specifying an argument count of three in a putFunction() call but only sending two, or calling nextPacket() before you have finished reading the contents of the current packet. The J/Link API helps you avoid such mistakes by providing high-level functions like waitForAnswer() and evaluateToOutputForm() that hide the low-level interaction with the link, but in all but the most trivial J/Link programs it is still possible to make such errors. Just remember that the vast majority of MathLinkException objects thrown represent logic errors in the code of the program, not user errors or runtime anomalies. They are just bugs to which the programmer needs to be alerted so that they can be fixed.
In a small, well-formed J/Link program, you may be able to put a lot of J/Link calls, perhaps even the entire program, within a single try/catch block because there is no need to know exactly what the program was doing when the error occurred—all you are going to do is print a message and exit. The example program in the section "Sample Program" has this structure. Many J/Link programs will need to be a little more refined in their treatment of MathLinkException objects than just quitting. No matter what type of program you are writing, it is strongly recommended that while you are developing the program, you use try/catch blocks in a fine-grained way (that is, only wrapping small, meaningful units of code in each try/catch block), and always put code in your catch block that prints a message or alerts you in some way. Many hours of debugging have been wasted because programmers did not realize a WSTP error had occurred, or they incorrectly identified the region of code where it happened.
Here is a sample of how to handle a MathLinkException in the case where you want to try to recover. The first thing is to call clearError(), as other WSTP calls will fail until the error state is cleared. If clearError() returns false then there is nothing to do but close the link. An example of the type of error that clearError() will fix is the very common mistake of calling nextPacket() before the current packet has been completely read. After clearError() is called, the link is reset to the state it was in before the offending nextPacket(). You can then read the rest of the current packet or call newPacket() to throw it away. Another example of a class of errors where clearError() will work is calling an incorrect "get" method for the type of data waiting on the link—for example, calling getFunction() when an integer is waiting. After calling clearError(), you can read the integer.
try {
...
} catch (MathLinkException e) {
System.err.println(e.toString());
if (ml.clearError() != true) {
System.err.println("MathLinkException was unrecoverable; closing link.");
ml.close();
return; // Or whatever cleanup is appropriate
}
// How you respond after clearError is up to you.
}
What you do in your catch block after calling clearError() will depend on what you were doing when the exception was thrown. About the only useful general guideline provided here is that if you are reading from the link when the exception is thrown, call newPacket() to abandon the rest of the packet. At least then you will know that you are ready to read a fresh packet, even if you have lost the contents of the previous packet.
MathLinkException has a few useful methods that will tell you about the cause of the exception. The getErrCode() method will give you the internal WSTP error code, which can be looked up in the WSTP documentation. It is probably more useful to get the internal message associated with the error, which is given by getMessage(). The toString() method gives you all this information, and will be the most useful output for debugging.
// Some useful MathLinkException methods.
public int getErrCode();
public String getMessage();
public String toString();
public Throwable getCause();
Some MathLinkException exceptions might not be "native" WSTP errors, but rather special exceptions thrown by implementations of the various link interfaces. J/Link follows the standard "exception chaining" idiom by allowing link implementations to catch these exceptions internally, wrap them in a MathLinkException, and re-throw them. As an example, consider a KernelLink implementation built on top of Java Remote Method Invocation (RMI). Methods called via RMI can throw a RemoteException, so such a link implementation might choose to catch internally every RemoteException and wrap it in a MathLinkException. If it did not do this, and instead all its methods could throw a RemoteException in addition to MathLinkException, all client code that used it would have to be modified. What all this means is that if you catch a MathLinkException, it might be "wrapping" another exception, instead of representing an internal WSTP problem. You can use the getCause() method on the MathLinkException instance to retrieve the wrapped exception that was the actual cause of the problem. The getCause() method will return null in the typical case where the MathLinkException is not wrapping another type of exception.
Graphics and Typeset Output
Preamble
Many developers who are writing Java programs that use the Wolfram Language will want to produce Wolfram Language graphics and typeset expressions. This is a relatively complex subject, although J/Link has some very high-level methods designed to make obtaining and displaying these images very simple. If you want to display Wolfram Language images in a Java window, you can use the MathCanvas or MathGraphicsJPanel components, discussed in the next section. If you want a little more control over the process, or if you want to do something with the image data other than display it (like write it to a file or stream), you should read the section on the "evaluateToImage() and evaluateToTypeset()" methods.
MathCanvas and MathGraphicsJPanel
The MathCanvas and MathGraphicsJPanel classes are discussed in "The MathCanvas and MathGraphicsJPanel Classes" because they are often used from Wolfram Language programs. They are just as useful in Java programs. Each is a simple graphical component (a JavaBean, in fact), that can display Wolfram Language graphics and typeset expressions. MathCanvas is a subclass of the AWT Canvas class, and MathGraphicsJPanel is a subclass of the Swing JPanel class. They are conceptually identical and have the same set of extra methods for dealing with Wolfram Language graphics. You use MathCanvas when you want an AWT component and MathGraphicsJPanel when you want a Swing component.
Programmers who want to see how they work are strongly encouraged to examine the source code. The most important methods from these classes are as follows.
public void setMathCommand(String cmd);
public void setImageType(int type);
public void setUsesFE(boolean useFE);
public void setUsesTraditionalForm(boolean useTradForm);
public void setImage(Image im);
public void recompute();
public void repaintNow();
For brevity, the discussion that follows will refer only to MathCanvas; everything said applies equally to MathGraphicsJPanel. Use setMathCommand() to specify arbitrary Wolfram Language code that will be evaluated and have its result displayed. If you are using your MathCanvas to display Wolfram Language graphics, the result of the computation must be a graphics object (that is, an expression with head Graphics, Graphics3D, and so on). It is not enough that the command produces a graphic—it must return a graphic. Thus, setMathCommand("Plot[x,{x,0,1}]") will work, but setMathCommand("Plot[x,{x,0,1}];") will not because the trailing semicolon causes the expression to evaluate to Null. If you are using the MathCanvas to display typeset output, then the result of executing the code supplied in setMathCommand() can be anything. Its typeset form will be displayed. Within the code that you specify via setMathCommand(), quotation marks and other characters that have special meanings inside Java strings must be escaped by preceding them with a backslash, as in setMathCommand("Plot[x,{x,0,1},PlotLabel->\"A Plot\"]").
The setImageType() method is what toggles between displaying a graphic and displaying a typeset expression. Call setImageType(MathCanvas.GRAPHICS) or setImageType(MathCanvas.TYPESET) to toggle between the two modes.
J/Link can create images of Wolfram Language graphics in two ways, either by using only the kernel or by using the kernel along with some extra services from the front end. The front end generally can do a better job, but there are some tradeoffs involved. If you want to use the front end, call setUsesFE(true). When you call setUsesFE(true), the front end may be launched, or an already running copy may be used. The exact behavior depends on what operating system and version of the Wolfram Language you have. In Mathematica 6.0 and higher, all graphics output requires the front end, so the setUsesFE() method has no effect—it is always true.
For typeset output, the default is StandardForm. To change to TraditionalForm call setUsesTraditionalForm(true). When generating typeset output (that is, if you have called setImageType(MathCanvas.TYPESET)), the front end is always involved in generating typeset output, so make sure you understand the issues discussed in "Using the Front End as a Service".
When you call setMathCommand(), the command is executed immediately and the resulting image is cached and used every time the window is repainted. Sometimes the code in your math command depends on variables that will change. To force the command to be recomputed and the new image displayed, call recompute().
The repaintNow() method is like a "smart" version of the JComponent method paintImmediately(), and you use it in the same circumstances as paintImmediately(). It knows about the image that needs to be drawn and it will block until all the pixels are ready. You can use this method to force an immediate redraw when you want the image to be updated instantly in response to some user action like dragging a slider that controls a variable upon which the plot depends. If you call the standard method repaint() instead, Java might not get around to repainting the image until many frames have gone by, and the plot will appear to jump from one value to another, rather than being redrawn for every change in the variable's value.
The preceding discussion described how you can easily display Wolfram Language output in a MathCanvas simply by supplying some code to setMathCommand(). Another way to get an image displayed in a MathCanvas is to create a Java Image object yourself and call the setImage() method. You might want to do this if your image is a bitmap created with some Wolfram Language data, or if you have drawn into an offscreen image using the Java graphics API. The setImage() method was created mainly for use from Wolfram Language code, and it is somewhat less important for Java programmers because you already have other ways to draw into your own components. It can still be useful in Java programs, though, since it can save you from having to write your own subclass of an AWT component just to override its paint() method, which is the usual technique for drawing your own content in components. When used with the setImage() method, a MathCanvas is really just a useful AWT component—it has nothing directly to do with the Wolfram Language.
The next section presents a sample program that uses a MathCanvas to display graphics and typeset output.
A Sample Program That Displays Graphics and Typeset Results
Here is the code for a simple program that presents a window that displays Wolfram Language graphics and typeset output. It is an example of how to use the MathCanvas class. The code and compiled class files for this program are available in the JLink/Examples/Part2/GraphicsApp directory. Launch the program with the pathname to the kernel executable as an argument (note the use of the quote marks " and '):
(Windows)
java -classpath GraphicsApp.jar;..\..\..\JLink.jar GraphicsApp "c:\program files\wolfram research\mathematica\10.0\mathkernel"
(Linux)
java -classpath GraphicsApp.jar:../../../JLink.jar GraphicsApp 'math -mathlink'
(Mac OSX command line)
java -classpath GraphicsApp.jar:../../../JLink.jar GraphicsApp '"/Applications/Mathematica.app/Contents/MacOS/MathKernel" -mathlink'
import com.wolfram.jlink.*;
import java.awt.*;
import java.awt.event.*;
public class GraphicsApp extends Frame {
static GraphicsApp app;
static KernelLink ml;
MathCanvas mathCanvas;
TextArea inputTextArea;
Button evalButton;
Checkbox useFEButton;
Checkbox graphicsButton;
Checkbox typesetButton;
public static void main(String[] argv) {
try {
String[] mlArgs = {"-linkmode", "launch", "-linkname", argv[0]};
ml = MathLinkFactory.createKernelLink(mlArgs);
ml.discardAnswer();
} catch (MathLinkException e) {
System.out.println("An error occurred connecting to the kernel.");
if (ml != null)
ml.close();
return;
}
app = new GraphicsApp();
}
public GraphicsApp() {
setLayout(null);
setTitle("Graphics App");
mathCanvas = new MathCanvas(ml);
add(mathCanvas);
mathCanvas.setBackground(Color.white);
inputTextArea = new TextArea("", 2, 40, TextArea.SCROLLBARS_VERTICAL_ONLY);
add(inputTextArea);
evalButton = new Button("Evaluate");
add(evalButton);
evalButton.addActionListener(new BnAdptr());
useFEButton = new Checkbox("Use front end", false);
CheckboxGroup cg = new CheckboxGroup();
graphicsButton = new Checkbox("Show graphics output", true, cg);
typesetButton = new Checkbox("Show typeset result", false, cg);
add(useFEButton);
add(graphicsButton);
add(typesetButton);
setSize(300, 400);
setLocation(100,100);
mathCanvas.setBounds(10, 25, 280, 240);
inputTextArea.setBounds(10, 270, 210, 60);
evalButton.setBounds(230, 290, 60, 30);
graphicsButton.setBounds(20, 340, 160, 20);
typesetButton.setBounds(20, 365, 160, 20);
useFEButton.setBounds(180, 340, 100, 20);
addWindowListener(new WnAdptr());
setBackground(Color.lightGray);
setResizable(false);
// Although this code would automatically be called in
// evaluateToImage or evaluateToTypeset, it can cause the
// front end window to come in front of this Java window.
// Thus, it is best to get it out of the way at the start
// and call toFront to put this window back in front.
// KernelLink.PACKAGE_CONTEXT is just "JLink`", but it is
// preferable to use this symbolic constant instead of
// hard-coding the package context.
ml.evaluateToInputForm("Needs[\"" + KernelLink.PACKAGE_CONTEXT + "\"]", 0);
ml.evaluateToInputForm("ConnectToFrontEnd[]", 0);
setVisible(true);
toFront();
}
class BnAdptr implements ActionListener {
public void actionPerformed(ActionEvent e) {
mathCanvas.setImageType(
graphicsButton.getState() ? MathCanvas.GRAPHICS : MathCanvas.TYPESET);
mathCanvas.setUsesFE(useFEButton.getState());
mathCanvas.setMathCommand(inputTextArea.getText());
}
}
class WnAdptr extends WindowAdapter {
public void windowClosing(WindowEvent event) {
if (ml != null) {
// Because we used the front end, it is important
// to call CloseFrontEnd[] before closing the link.
// Counterintuitively, this is not because we want
// to force the front end to quit, but because we
// _don't_ want to do this if the user has begun
// working in the front end session we started.
// CloseFrontEnd knows how to politely disengage
// from the front end if necessary. The need for
// this will go away in future releases of
// Mathematica.
ml.evaluateToInputForm("CloseFrontEnd[]", 0);
ml.close();
}
dispose();
System.exit(0);
}
}
}
evaluateToImage() and evaluateToTypeset()
If the MathCanvas or MathGraphicsJPanel classes described in the preceding two sections are not suitable for your needs, you can manually produce images of Wolfram Language graphics and typeset expressions using the evaluateToImage() and evaluateToTypeset() methods in the KernelLink interface.
There are multiple signatures for each. For evaluateToImage(), one set takes a simpler argument list and uses default values for the less commonly used arguments. Here are graphics and typesetting methods from the KernelLink interface
The evaluateToImage() method takes the input as a string or Expr, and a width and height of the resulting graphic in pixels. The extended versions let you specify a dots-per-inch value, and whether to use the notebook front end or not (as discussed later). The short versions use the values of 0 for the dpi and false for whether to use the front end. Specifying 0 for dpi causes the Wolfram Language to use its default value. The image will be sized to fit within a box of width
height, without changing its aspect ratio. In other words, the image might not have exactly these dimensions, but it will never be larger in either dimension and it will never be stretched in one dimension to make it fit better. Pass 0 for the width and height to get their Automatic values. If the input does not evaluate to a graphics expression, then null is returned. It is not enough that the computation causes a plot to be generated—the return value of the computation must have head Graphics (or Graphics3D, etc.). If the useFrontEnd argument is true, evaluateToImage() will launch the notebook front end if it is not already running. Note that the useFrontEnd argument is irrelevant when using Mathematica 5.1 and higher—the front end is always used for graphics.
The evaluateToTypeset() method takes the input as a string or Expr, a page width to wrap the output to before it is typeset, and a flag specifying whether to use StandardForm or TraditionalForm. The units for the page width are pixels (use 0 for a page width of infinity). The evaluateToTypeset() method requires the services of the notebook front end, which will be launched if it is not already running.
The result of both of these methods is a byte array of GIF data. The GIF format is well suited to most Wolfram Language graphics, but for some 3D graphics the color usage is not ideal. If you want to change to using JPEG format, you can set $DefaultImageFormat to "JPEG" in the kernel.
// Specifies JPEG format for subsequent calls to evaluateToImage()
// and evaluateToTypeset().
ml.evaluateToOutputForm("$DefaultImageFormat = \"JPEG\"", 0);
These methods are like evaluateToInputForm() and evaluateToOutputForm() in that they perform the computation and return the result in a single step. Together, all these methods are referred to as the "evaluateTo" methods. They all return null in the unlikely event that a MathLinkException occurred.
The MathCanvas and MathGraphicsJPanel classes use these methods internally, so their source code is a good place to look for examples of calling the methods. The MathCanvas code demonstrates how to take the byte array of GIF or JPEG data and turn it into a Java Image for display.
The following Typesetter sample program is another example. It takes a Wolfram Language expression supplied on the command line, calls evaluateToTypeset(), and writes the image data out to a GIF file. You would invoke it from the command line like this.
(Windows)
java Typesetter "c:\program files\wolfram research\mathematica\10.0\mathkernel" "Sqrt[z]" test.gif
(Linux)
java Typesetter 'math -mathlink' "Sqrt[z]" test.gif
(Mac OSX command line)
java Typesetter '"/Applications/Mathematica.app/Contents/MacOS/MathKernel" -mathlink' "Sqrt[z]" test.gif
The first argument is the command line to launch the Wolfram Language kernel, the second argument is the expression to typeset, and the third argument is the file name to create. This program is not intended to be particularly useful—it is just a simple demonstration.
import com.wolfram.jlink.*;
import java.io.*;
public class Typesetter {
public static void main(String[] argv) throws MathLinkException {
KernelLink ml;
try {
String[] mlArgs = {"-linkmode", "launch", "-linkname", argv[0]};
ml = MathLinkFactory.createKernelLink(mlArgs);
ml.discardAnswer();
} catch (MathLinkException e) {
System.err.println("FATAL ERROR: link creation failed.");
return;
}
byte[] gifData = ml.evaluateToTypeset(argv[1], 0, false);
try {
FileOutputStream s = new FileOutputStream(new File(argv[2]));
s.write(gifData);
s.close();
} catch (IOException e) {}
// ALWAYS execute CloseFrontEnd[] before killing the kernel if you used
// evaluateToTypeset(), or evaluateToImage() with the useFE parameter
// set to true:
ml.evaluateToOutputForm("CloseFrontEnd[]", 0);
ml.close();
}
}
It is very important to note that you execute CloseFrontEnd[] before closing the link to the kernel. This is essential to prevent the front end from quitting in circumstances where it should not—specifically, if an already-running copy was used and the user has open documents.
Aborting and Interrupting Computations
J/Link provides two ways in which you can interrupt or abort computations. The first technique uses the low-level putMessage() function to send the desired WSTP message. The second and preferred technique is to use a new set of KernelLink methods introduced in J/Link 2.0. These are listed as follows.
The abortEvaluation() method will send an abort request to the Wolfram Language, identical to what happens in the notebook front end when you select Evaluation ▶ Abort Evaluation. The Wolfram Language responds to this command by terminating the current evaluation and returning the symbol $Aborted. Be aware that sometimes the kernel is in a state where it cannot respond immediately to interrupts or aborts.
The interruptEvaluation() method will send an interrupt request to the Wolfram Language. The Wolfram Language responds to this command by interrupting the current evaluation and sending back a special packet that contains choices for what to do next. The choices can depend on what the kernel is doing at the moment, but in most cases they include aborting, continuing, or entering a dialog. It is not likely that you will want to have to deal with this list of choices on your own, so you might choose instead to call abortEvaluation() and just stop the computation. If you are developing an interactive front end, however, you might decide that you want your users to see the same types of choices that the notebook front end provides. If this is the case, then you can use the new InterruptDialog class, which is discussed in a later section.
The abandonEvaluation() method does exactly what its name suggests—it causes any command that is currently waiting for something to arrive on the link to back out immediately and throw a MathLinkException. This MathLinkException is recoverable (meaning that clearError() will return true), so in theory you could call waitForAnswer() again later and get the result when it arrives. In practice, however, you should generally not use this method unless you plan to close the link. You should think of abandonEvaluation() method as an "emergency exit" function that lets your program back out of waiting for a result no matter what state the kernel is in. Remember that the abortEvaluation() method simply sends an abort request to the Wolfram Language, and thus it requires some cooperation from the kernel; there is no guarantee that the current evaluation will abort in a timely manner, if ever. If you call close() right after abandonEvaluation(), the kernel will typically not die, because it is still busy with a computation. You should call terminateKernel() before close() to ensure that the kernel shuts down. A code fragment that follows demonstrates this.
The terminateKernel() method will send a terminate request to the Wolfram Language. It does this by sending the low-level WSTP message WSTERMINATEMESSAGE. This is the strongest step you can take to tell the kernel to shut down, short of killing the kernel process with operating system commands. In "normal" operation of the kernel, when you call close() on the link, the kernel will quit. In some cases, however, generally only if the kernel is currently busy computing, it will not quit. In such cases you can generally force the kernel to quit immediately by calling terminateKernel(). You should always call close() immediately afterward. In a server environment, where a Java program that starts and stops Wolfram Language kernels needs to run unattended for a very long period of time with the highest reliability possible, you might consider always calling terminateKernel() before close(), if there is any chance that close() needs to be called while the kernel is still busy. In some rare circumstances (generally only if something has gone wrong with the Wolfram Language), even calling terminateKernel() will not force the kernel to quit, and you might need to use facilities of your operating system (perhaps invoked via Java's Runtime.exec() method) to kill the kernel process.
If you want to be able to abort, interrupt, or abandon computations, your program will need to have at least two threads. The thread on which the computation is called will probably look like all the sample programs you have seen. You would call one of the "evaluateTo" methods, or perhaps evaluate() followed by waitForAnswer(). This thread will block, waiting for the result. On a separate thread, such as the user interface thread, you could periodically check for some event, like a time-out period elapsing. Or, you could use an event listener to be notified when the Esc key was pressed. Whichever way you want to detect the abort request, all you need to do is call putMessage(MathLink.MLABORTMESSAGE). If the kernel receives the message before it finishes, and it is doing something that can be aborted, the computation will end and return the symbol $Aborted. You typically will not need to do anything special in the computation thread. You wait for the answer as usual; it might come back as $Aborted instead of the final result, that is all. Here are some typical code fragments that demonstrate aborting a computation.
// On thread 1
ml.evaluate("Do[2+2, {20000000}]");
ml.waitForAnswer();
// If user aborted, the result will be the symbol $Aborted.
// On thread 2
if (userPressedEscKey() || timeoutElapsed())
ml.abortEvaluation();
Here is some code that demonstrates how to abandon a computation and force an immediate shutdown of the kernel.
// On thread 1
try {
ml.evaluate("While[True]");
ml.discardAnswer();
} catch (MathLinkException e) {
// We will get here when abandonEvaluation() is called on the other thread.
System.err.println("MathLinkException occurred: " + e.toString());
if (!ml.clearError()) {
// clearError() will always fail when abandonEvaluation() was called.
ml.terminateKernel();
ml.close();
}
}
// On thread 2
if (timeoutElapsedAndReallyNeedToShutdownKernel())
ml.abandonEvaluation();
The discussion so far has focused on the high-level interface for interrupting and aborting computations. The alternative is to use the low-level method putMessage() and pass one of the constants MathLink.MLINTERRUPTMESSAGE, MathLink.MLABORTMESSAGE, or MathLink.MLTERMINATEMESSAGE. There is no reason to do this, however, as interruptEvaluation(), abortEvaluation(), and terminateKernel() are just one-line methods that put the appropriate message. The "messages" referred to in the MathLink method putMessage() are not related to the familiar Wolfram System error and warning messages. Instead, they are a special type of communication between two WSTP programs. This communication takes place on a different channel from the normal flow of expressions, which is why you can call putMessage() while the kernel is in the middle of a computation and not reading from the link.
There are several other MathLink methods with "message" in their names. These are messageReady(), getMessage(), addMessageHandler(), and removeMessageHandler(). These methods are only useful if you want to be able to detect messages the kernel sends to you. J/Link programmers will rarely want to do this, so these methods are not discussed in detail. Please note that messageReady() and getMessage() no longer function in J/Link 2.0 and higher. If you want to be able to receive messages from the Wolfram System, you must use addMessageHandler() and removeMessageHandler(). There is more information in the JavaDocs for these methods.
Using Marks
WSTP allows you to set a "mark" in a link, so that you can read more data and then seek back to the mark, restoring the link to the state it was in before you read the data. Thus, marks let you read data off a link and not have the data consumed, so you can read it again later. There are three mark-related methods in the MathLink interface.
// In the MathLink interface:
long createMark() throws MathLinkException;
void seekMark(long mark);
void destroyMark(long mark);
One common reason to use a mark is if you want to examine an incoming expression and branch to different code depending on some property of the expression. You want the code that actually handles the expression to see the entire expression, but you will need to read at least a little bit of the expression to decide how it must be handled (perhaps just calling getFunction() to see the head). Here is a code fragment demonstrating this technique.
String head = null;
long mark = ml.createMark();
try {
head = ml.getFunction().name;
ml.seekMark(mark);
} finally {
ml.destroyMark(mark);
}
if (head.equals("foo"))
handleFoo(ml);
else if (head.equals("bar"))
handleBar(ml);
Because you seek back to the mark after calling getFunction(), the link will be reset to the beginning of the expression when the handleFoo() and handleBar() methods are entered. Note the use of a try/finally block to ensure that the mark is always destroyed, whether or not an exception of any kind is thrown after it is created. You should always use marks in this way. Right after calling createMark(), start a try block whose finally clause calls destroyMark(). It is important that no other code intervenes between createMark() and the try block, especially WSTP calls (which can throw MathLinkException). If a mark is created and not destroyed, a memory leak will result because incoming data will pile up on the link, never to be freed.
Another common use for marks is to allow you to read an expression one way, and if a MathLinkException is thrown, go back and try reading it a different way. For example, you might be expecting a list of real numbers to be waiting on the link. You can set a mark and then call getDoubleArray1(). If the data on the link cannot be coerced to a list of reals, getDoubleArray1() will throw a MathLinkException. You can then seek back to the mark and try a different method of reading the data.
double[] data = null;
long mark = ml.createMark();
ty {
data = ml.getDoubleArray1();
} catch (MathLinkException e) {
ml.clearError();
ml.seekMark(mark);
// Here, try a different way of reading the data:
switch (ml.getNext()) {
...
}
} finally {
ml.destroyMark(mark);
}
Much of the functionality of marks is subsumed by the Expr class, described in "Motivation for the Expr Class". Expr objects allow you to easily examine an expression over and over in different ways, and with the peekExpr() method you can look at the upcoming expression without consuming it off the link.
Using Loopback Links
In addition to the MathLink and KernelLink interfaces, there is one other link interface: LoopbackLink. Loopback links are a feature of WSTP that allow a program to conveniently store Wolfram Language expressions. Say you want to read an expression off a link, keep it around for awhile, and then write it back onto the same or a different link. How would you do this? If you read it with the standard reading functions (getFunction(), getInteger(), and so on), you will have broken the expression down into its atomic components, of which there might be very many. Then you will have to reconstruct it later with the corresponding series of "put" methods. What you really need is a temporary place to transfer the expression in its entirety, where it can be read later or transferred again to a different link. A loopback link serves this purpose.
Before proceeding to examine loopback links, please note that J/Link's Expr class is used for the same sorts of things that a loopback link is used for. Expr objects use loopback links internally, and are a much richer extension of the functionality that loopback links provide. You should consider using Expr objects instead of loopback links in your programs.
If a MathLink is like a pipe, then a loopback link is a pipe that bends around to point back at you. You manage both ends of the link, writing into one "end" and reading out the other, in FIFO order. To create a loopback link in J/Link, use the MathLinkFactory method createLoopbackLink().
// In class MathLinkFactory:
public static LoopbackLink createLoopbackLink() throws MathLinkException;
The LoopbackLink interface extends the MathLink interface, so all the MathLink methods can be used on loopback links. LoopbackLink adds no methods beyond those in the MathLink interface. Why have a separate interface then? It can be useful to have a separate type for this kind of link, because it has different behavior than a normal one-sided WSTP link. Furthermore, there is one method in the MathLink interface (transferToEndOfLoopbackLink()) that requires, as an argument, a loopback link. Thus, it provides a small measure of type safety within J/Link and your own programs to have a separate LoopbackLink type.
You will probably use the MathLink method transferExpression(), or its variant transferToEndOfLoopbackLink(), in conjunction with loopback links. You will need transferExpression() either to move an expression from another link onto a loopback link or to move an expression you have manually placed on a loopback link onto another link. Here are the declarations of these two methods.
// In the MathLink interface
void transferExpression(MathLink source) throws MathLinkException;
void transferToEndOfLoopbackLink(LoopbackLink source) throws MathLinkException;
Note that the source link is the argument and the destination is the this link. The transferExpression() method reads one expression from the source link and puts it on the destination link, and the transferToEndOfLoopbackLink() method moves all the expressions on the source link (which must be a LoopbackLink) to the destination link.
As already mentioned, a common case where loopback links are convenient is in the temporary storage of an expression for later writing to a different link. This is done more simply using an Expr object, however ("Motivation for the Expr Class"). Another use for loopback links is to allow you to begin sending an expression before you know how long it will be. Recall that the putFunction() method requires you to specify the number of arguments (i.e., the length). There are times, though, when you do not know ahead of time how long the expression will be. Consider the following code fragment. You need to send a list of random numbers to the Wolfram Language, the length of which depends on a test whose outcome cannot be known at compile time. You can create a loopback link and push the numbers onto it as they are generated, counting them as you go. When the loop finishes, you know how many were generated, so you call putFunction() and then just "pour out" the contents of the loopback link onto the destination link. In this example, it would be easy to store the accumulating numbers in a Java array or Vector rather than a loopback link. But if you were sending complicated expressions it might not be so easy to store them in native Java structures. It is often easier just to write them on a link as you go, and leave the storage issues up to the internals of WSTP.
// Here we demonstrate sending an expression (a list of reals)
// whose length is unknown at the start.
try {
...
LoopbackLink loop = MathLinkFactory.createLoopbackLink();
int count = 0;
while (someTest) {
loop.put(Math.random());
count++;
}
ml.putFunction("List", count);
ml.transferToEndOfLoopbackLink(loop);
loop.close();
...
} catch (MathLinkException e) {}
Using Expr Objects
Motivation for the Expr Class
The Expr class provides a direct representation of Wolfram Language expressions in Java. You can guess that this will be useful, since everything in the Wolfram Language is an expression and WSTP is all about communicating Wolfram Language expressions between programs.
You have several ways of handling Wolfram Language expressions in a WSTP program. First, you can send and/or receive them as strings. This is often convenient, particularly if you are taking input typed by a user, or displaying results to the user. Many of the KernelLink methods can take input as a string and return the result as a string. A second way of handling Wolfram Language expressions is to put them on the link or read them off the link a piece at a time with a series of "put" or "get" calls. A third way is to store them on a loopback link and shuttle them around between links. Each of these methods has advantages and disadvantages.
Loopback links were described in the previous section, but it is worthwhile to summarize them here, as it provides some of the background to understanding the motivation for the Expr class. Basically, a loopback link provides a means to store a Wolfram Language expression without having tediously to read it off the link, disassembling it into its component atoms in the process. Loopback links, then, let you store expressions for later reading or just dumping onto another link. If you eventually want to read and examine the expression, however, you are still stuck with the difficult task of dissecting an arbitrary expression off a link with the correct sequence of "get" calls. This is where the Expr class comes in. Like a loopback link, an Expr object stores an arbitrary Wolfram Language expression. The Expr class goes further, though, and provides a set of methods for examining the structure of the expression, extracting parts of it, and building new ones. The names and operation of these methods will be familiar to Wolfram Language programmers: head(), length(), dimensions(), part(), stringQ(), vectorQ(), matrixQ(), insert(), delete(), and many others.
The advantage of an Expr over a loopback link, then, is that you are not restricted to using the low-level MathLink interface for examining an expression. Consider the task of receiving an arbitrary expression from the Wolfram Language and determining if its element at position [[2, 3]] (in Wolfram Language notation) is a vector (a list with no sublists). This can be done with an Expr object as follows.
ml.evaluate("some code");
ml.waitForAnswer();
Expr e = ml.getExpr();
Expr part23 = e.part(new int[] {2, 3});
boolean isVector = part23.vectorQ();
This task would be much more difficult with the MathLink interface. The Expr class provides a minimal Wolfram Language-like functional interface for examining and dissecting expressions.
Methods in the MathLink Interface for Reading and Writing Exprs
There are three methods in the MathLink interface for dealing with Expr objects. This is in addition to the numerous methods in the Expr class itself, which deal with composing and decomposing Expr objects. The getExpr() and peekExpr() methods read an expression off a link, but peekExpr() resets the link to the beginning of the expression—it "peeks" ahead at the upcoming expression without consuming it. This is quite useful for debugging. The put() method will send an Expr object as its corresponding Wolfram Language expression.
// In the MathLink interface:
Expr getExpr() throws MathLinkException;
Expr peekExpr() throws MathLinkException;
void put(Object obj) throws MathLinkException;
Exprs as Replacements for Loopback Links
One way to use Expr is as a simple replacement for a loopback link. You can use the MathLink method getExpr() to read any type of expression off a link and store it in the resulting Expr object. To write the expression onto a link, use the put() method. Compare the following two code fragments.
// Old way, using a loopback link
LoopbackLink loop = MathLinkFactory.createLoopbackLink();
// Read expr off of link and store it on loopback
loop.transferExpression(ml);
...
// Later, write the expr back on the link
ml.transferExpression(loop);
loop.close();
// New way, using an Expr
Expr e = ml.getExpr();
...
// Later, write the expression back on the link
ml.put(e);
e.dispose();
Note the call to dispose() at the end. The dispose() method tells an Expr object to release certain resources that it might be using internally. You should generally use dispose() on an Expr when you are finished with it. The dispose() method is discussed in more detail in "Disposing of Exprs".
Exprs as a Means to Get String Representations of Expressions
A particularly useful method in the Expr class is toString(), which produces a string representation of the expression similar to InputForm (without involving the kernel, of course). This is particularly handy for debugging purposes, when you want a quick way to see what is arriving on the link. In "The PacketPrinter Class for Debugging" it was mentioned that J/Link has a class PacketPrinter that implements the PacketListener interface and can be used easily to print out the contents of packets as they arrive in your program, without modifying your program. Following is the packetArrived() method of that class, which uses an Expr object and its toString() method to get the printable text representation of an arbitrary expression.
public boolean packetArrived(PacketArrivedEvent evt) throws MathLinkException {
KernelLink ml = (KernelLink) evt.getSource();
Expr e = ml.getExpr();
strm.println("Packet type was " + pktToName(evt.getPktType()) +
". Contents follows.");
strm.println(e.toString());
e.dispose();
return true;
}
Whether you use the PacketPrinter class or not, this technique is useful to see what expressions are being passed around. This is often used in conjunction with the MathLink peekExpr() method, which reads an expression off the link, but then resets the link so that the expression is not consumed. In this way, you can look at expressions arriving on links without interfering with the rest of the link-reading code in your program. The PacketPrinter code shown does not use peekExpr(), but it has the same effect since the resetting of the link is handled elsewhere.
Exprs as Arguments to KernelLink Methods
The KernelLink methods evaluate(), evaluateToInputForm(), evaluateToOutputForm(), evaluateToImage(), and evaluateToTypeset() take the Wolfram Language expression to evaluate as either a string or an Expr. "The 'evaluateTo' Methods" discusses why and how you would use an Expr object to provide the input instead of a string. This examines one trivial example comparing how you would send 2+2 to the Wolfram Language as both a string and as an Expr. In the Expr case you build the expression on a loopback link and then read the Expr off this link. For all but the simplest expressions, this is generally easier than trying to use the Expr constructors.
// Send input as a string:
String result = MathLink.evaluateToOutputForm("2+2", 0);
// Send input as an Expr:
LoopbackLink loop = MathLinkFactory.createLoopbackLink();
// Create the expression 2+2 on the loopback link
loop.putFunction("Plus", 2);
loop.put(2);
loop.put(2);
loop.endPacket();
// Now read the Expr off the loopback link
Expr e = loop.getExpr();
// We are done with the loopback link now.
loop.close();
String result = ml.evaluateToOutputForm(e, 0);
e.dispose();
Examining and Manipulating Exprs
Like expressions in the Wolfram Language, Expr objects are immutable, meaning that they cannot be modified once they have been created. Operations that might appear to modify an Expr, like the insert() method, actually copy the original, modify this copy, and then return a new immutable object. One consequence of being immutable is that the Expr class is thread-safe—multiple threads can operate on the same Expr without worrying about synchronization.
The Expr class provides a minimal Wolfram Language-like API for examination and manipulation. The functions are generally named after their Wolfram Language counterparts, and they operate in the same way. This section will only provide a brief review of the Expr API. Consult the JavaDocs (found in the JLink/Documentation/JavaDoc directory) for more information about these methods.
Here are some methods for learning about the structure of an Expr.
There are a number of methods whose names end in "Q", following the same naming pattern as in the Wolfram Language for functions that return true or false. This is not the complete list.
// A sampling of the "Q" methods
public boolean atomQ();
public boolean stringQ();
public boolean integerQ();
public boolean numberQ();
public boolean trueQ();
public boolean listQ();
public boolean vectorQ();
public boolean matrixQ();
There are several methods for taking apart and building up an Expr. Like in the Wolfram Language, part numbers and indices are 1-based. You can also supply negative numbers to count backward from the end. Many Expr methods throw an IllegalArgumentException if they are called with invalid input, such as a part index larger than the length of the Expr. These exceptions parallel the Wolfram System error messages you would get if you made the same error in Wolfram Language code.
public Expr part(int index);
public Expr part(int[] indices);
public Expr take(int n);
public Expr delete(int n);
public Expr insert(Expr e, int n);
Here is some very simple code that demonstrates a few Expr operations.
ml.evaluate("Expand[(x + y)^4]");
ml.waitForAnswer();
Expr e1 = ml.getExpr();
System.out.println("e1 is: " + e1.toString());
System.out.println("the length is: " + e1.length());
System.out.println("the head is: " + e1.head().toString());
System.out.println("part [[2]] is: " + e1.part(2));
System.out.println("part [[-1]] is: " + e1.part(-1));
System.out.println("part [[2, 2]] is: " + e1.part(new int[]{2, 2}));
System.out.println("drop the last element: " + e1.delete(-1).toString());
System.out.println("e1 is unmodified: " + e1.toString());
Expr e2 = e1.insert(new Expr(new double[] {1.0, 2.0, 3.0}), 1);
System.out.println("e2 is: " + e2.toString());
That code prints the following.
e1 is: Plus[Power[x,3],Times[3,Power[x,2],y],Times[3,x,Power[y,2]],Power[y,3]]
the length is: 4
the head is: Plus
part [[2]] is: Times[3,Power[x,2],y]
part [[-1]] is: Power[y,3]
part [[2, 2]] is: Power[x,2]
drop the last element: Plus[Power[x,3],Times[3,Power[x,2],y],Times[3,x,Power[y,2]]]
e1 is unmodified: Plus[Power[x,3],Times[3,Power[x,2],y],Times[3,x,Power[y,2]],Power[y,3]]
e2 is: Plus[{1.0,2.0,3.0},Power[x,3],Times[3,Power[x,2],y],Times[3,x,Power[y,2]],Power[y,3]]
Disposing of Exprs
You have seen the dispose() method used frequently in this discussion of the Expr class. An Expr object might make use of a loopback link internally, and any time a Java class holds such a non-Java resource it is important to provide programmers with a dispose() method that causes the resource to be released. Although the finalizer for the Expr class will call dispose(), you cannot rely on the finalizer ever being called. Although it is good style to always call dispose() on an Expr when you are finished using it, you should know that many of the operations you can perform on an Expr will cause it to be "unwound" off its internal loopback link and cause that link to be closed. After this happens, the dispose() method is unnecessary. Calling the toString() method is an example of an operation that makes dispose() unnecessary, and in fact virtually any operation that queries the structure of an Expr or extracts a part will have this effect. This is useful to know since it allows shorthand code like the following.
This replaces the more verbose code following.
You should get in the habit of calling dispose() explicitly on Expr objects. In cases where it is inconvenient to store an Expr in a named variable, and you know that the Expr does not need to be disposed, then you can skip calling it.
Because extracting any piece of an existing expression will make dispose() unnecessary, you do not have to worry about calling dispose() on Expr objects that are obtained as parts of another expression.
Expr e = ml.getExpr();
// The moment that head() or part() are called on e below, you know that neither
// e, e2, nor e3 need to be disposed.
Expr e2 = e.head();
Expr e3 = e.part(1);
You cannot reliably use an Expr object after dispose() has been called on it. You have already seen that dispose() is often unnecessary because many Expr objects have already had their internal loopback links closed. For such an Expr, dispose() will have no effect at all and there would be no problem continuing to use the Expr after dispose() had been called. That being said, it is horrible style to ever try to use an Expr after calling dispose(). A call to dispose() should always be an unambiguous indicator that you have no further use for the given Expr or any part of it.
Threads, Blocking, and Yielding
The classes that implement the MathLink and KernelLink interfaces are not thread-safe. This means that if you write a J/Link program in which one link object is used by more than one thread, you need to pay careful attention to concurrency issues. The relevant methods in the link implementation classes are synchronized, so at the individual method level there is no chance that two threads can try to use the link at the same time. However, this is not enough to guarantee thread safety, because interactions with the link typically involve an entire transaction, encompassing a multistep write and read of the result. This entire transaction must be guarded. This is done by using synchronized blocks to ensure that the threads do not interfere with each other's use of the link.
The "evaluateTo" methods are synchronized, and they encapsulate an entire transaction within one call, so if you use only these methods you will have no concerns. On the other hand, if you use evaluate() and waitForAnswer(), or any other technique that splits up a single transaction across multiple method calls, you should wrap the transaction in a synchronized block, as follows.
Synchronization is only an issue if you have multiple threads using the same link.
J/Link functions that read from a link will block until data arrives on that link. For example, when you call evaluateToOutputForm(), it will not return until the Wolfram Language has computed and returned the result. This might be a problem if the thread on which evaluateToOutputForm() was called needs to stay active—for example, if it is the AWT thread, which processes user interface events.
How to handle blocking is a general programming problem, and there are a number of solutions. The Java environment is multithreaded, and thus an obvious solution is simply to make J/Link calls on a thread that does not need to be continuously responsive to other events in the system.
WSTP has the notion of a "yield function", which is a function you can designate to be called from the internals of WSTP while WSTP is blocking, waiting for input to arrive from the other side. A primary use for yield functions was to solve the blocking problem on operating systems that did not have threads, or for programming languages that did not have portable threading libraries. The way this would typically work is that your single-threaded program would install a yield function that ran its main event loop, so that the program could process user interface events even while it was waiting for WSTP data.
With Java, this motivation goes away. Rather than using a yield function to allow your program's only thread to still handle events while blocking, you simply start a separate thread from the user interface thread and let it happily block inside J/Link calls. Despite the limited usefulness of yield functions in Java programs, J/Link provides the ability to use them anyway.
// From the MathLink interface
public boolean setYieldFunction(Class cls, Object obj, String methName);
The setYieldFunction() method in the MathLink interface takes three arguments that identify the function to be called. These arguments are designed to accommodate static and nonstatic methods, so only two of the three need to be specified. For a static method, supply the method's Class and its name, leaving the second argument null. For a nonstatic method, supply the object on which you want the method called and the method's name, leaving the Class argument null. The function must be public, take no arguments, and return a boolean.
The function you specify will be called periodically while J/Link is blocking in a call that tries to read from the link. The return value is used to indicate whether J/Link should back out of the read call and return right away. Backing out of a read call will cause a MathLinkException to be thrown by the method that is reading from the link. This MathLinkException is recoverable (meaning that clearError() will return true), so you could call waitForAnswer() again later and get the result when it arrives if you want. Return false from the yield function to indicate that no action should be taken (thus false is the normal return value for a yield function), and return true to indicate that J/Link should back out of the reading call. To turn off the yield function, call setYieldFunction(null, null, null).
Very few J/Link programmers will have any need for yield functions. They are a solution to a problem that is better handled in Java by using multiple threads. About the only reasonable motivation for using a yield function is to be able to back out of a computation that is taking too long and either resists attempts to abort it, or you know you want to close the link anyway. This can also be done by calling abandonEvaluation() on a separate thread. The abandonEvaluation() method is described in "Aborting and Interrupting Computations". Note that abandonEvaluation() uses a yield function internally, so calling it will wipe out any yield function you might have installed on your own.
Sending Object References to the Wolfram Language
The first part of this User Guide describes how to use J/Link to allow Wolfram Language code to launch a Java runtime, load Java classes and directly execute Java methods. What this means for you, the reader of this tutorial, who are probably writing your own program to launch and communicate with the Wolfram Language kernel, is that you can have a very high-level interaction with the Wolfram Language. You can send your own objects to the Wolfram Language and use them in Wolfram Language code, but you have to take a special step to enable this type of interaction.
Consider what happens if you have written a Java front end to the Wolfram Language kernel and a user of your program calls a Wolfram Language function that uses the "installable Java" features of J/Link and thus calls InstallJava in the Wolfram Language. InstallJava launches a separate Java runtime and proceeds to direct all J/Link traffic to that Java runtime. The kernel is blissfully unconcerned whether the front end that is driving it is the notebook front end or your Java program—it does the same thing in each case. This is fine and it is what many J/Link programmers will want. You do not have to worry about doing anything special if some Wolfram Language code happens to invoke the "installable Java" features of J/Link, because a separate Java runtime will be used.
But what if you want to make use of the ability that J/Link gives Wolfram Language code to interact with Java objects? You might want to send Java object references to the Wolfram Language and operate on them with Wolfram Language code. The Wolfram Language "operates" on Java objects by calling into Java, so any callbacks for such objects must be directed to your Java runtime. A further detail of J/Link is that it only supports one active Java runtime for all installable Java uses. What this all adds up to is that if you want to pass references to your own objects into the Wolfram Language, then you must call InstallJava and specify the link to your Java runtime, and you must do this before any function is executed that itself calls InstallJava. Actually, a number of steps need to be taken to enable J/Link callbacks into your Java environment, so J/Link includes a special method in the KernelLink interface, enableObjectReferences(), that takes care of everything for you.
public void enableObjectReferences() throws MathLinkException;
// For sending object references:
public void put(Object obj) throws MathLinkException;
public void putReference(Object obj) throws MathLinkException;
After calling enableObjectReferences(), you can use the KernelLink interface's put() or putReference() methods to send Java objects to the Wolfram Language, and they will arrive as JavaObject expressions that can be used in Wolfram Language code as described throughout "Calling Java from the Wolfram Language". Recall that the difference between the put() and putReference() methods is that put() sends objects that have meaningful "value" representations in the Wolfram Language (like arrays and strings) by value, and all others by reference. The putReference() method sends everything as a reference. If you want to use enableObjectReferences(), call it early on in your program, before you call putReference(). It requires that the JLink.m file be present in the expected location, which means that J/Link must be installed in the standard way on the machine that is running the kernel.
Once you have called enableObjectReferences(), not only can you send Java objects to the Wolfram Language, you can also read Java object references that the Wolfram Language sends back to Java. The getObject() method is used for this purpose. If a valid JavaObject expression is waiting on the link, getObject() will return the object that it refers to.
If you call enableObjectReferences() in your program, it is imperative that you do not try to write your own packet loop. Instead, you must use the KernelLink methods that encapsulate the reading and handling of packets until a result is received. These methods are waitForAnswer(), discardAnswer(), evaluateToInputForm(), evaluateToOutputForm(), evaluateToImage(), and evaluateToTypeset(). If you want to see all the incoming packets yourself, use a PacketListener object in conjunction with one of these methods. This is discussed in "Using the PacketListener Interface".
It is worthwhile to examine in more detail the question of why you would want to use enableObjectReferences(). Traditionally, WSTP programmers have worked with the C API, which limits the types of data that can be passed back and forth between C and the Wolfram Language to Wolfram Language expressions. Since Wolfram Language expressions are not generally meaningful in a C program, this translates basically to numbers, strings, and arrays of these things. The native structures that are meaningful in your C or C++ program (structs, objects, functions, and so on) are not meaningful in the Wolfram Language. As a result, programmers tend to use a simplistic one-way communication with the Wolfram Language, decomposing the native data structures and objects into simple components like numbers and strings. Program logic and behavior is coded entirely in C or C++, with the Wolfram Language used solely for mathematical computations.
In contrast, J/Link allows Java and Wolfram Language code to collaborate in a high-level way. You can easily code algorithms and other program behavior in the Wolfram Language if it is easier for you. As an example, say you are writing a Java servlet that needs to use the Wolfram Language kernel in some way. Your servlet's doGet() method will be called with HttpServletRequest and HttpServletResponse objects as arguments. One approach would be to extract the information you need out of these objects, package it up in some way for the Wolfram Language, and send the desired computation for evaluation. But another approach would be simply to send the HttpServletRequest and HttpServletResponse objects themselves to the Wolfram Language. You can then use the features and syntax described in "Calling Java from the Wolfram Language" to code the behavior of the servlet in the Wolfram Language, rather than in Java. Of course, these are just two extremes of a continuum. At one end you have the servlet behavior hard-coded into a compiled Java class file, and you make use of the Wolfram Language in a limited way, using a very narrow pipeline (narrow in the logical sense, passing only simple things like numbers, strings, or arrays). At the other end of the continuum you have a completely generic servlet that does nothing but forward all the work into the Wolfram Language. The behavior of the servlet is written completely in the Wolfram Language. You can use this approach even if you do not need the Wolfram Language as a mathematics engine—you might just find it easier to develop and debug your servlet logic in the Wolfram Language. You can draw the line between Java and the Wolfram Language anywhere you like along the continuum, doing whatever amount of work you prefer in each language.
In case you are wondering what such a generic servlet might look like, here is the doGet() method.
// ml.enableObjectReferences() must have been called prior, for example
// in the servlet's init method.
public void doGet(HttpServletRequest req, HttpServletResponse res)
throws ServletException, IOException {
try {
ml.putFunction("EvaluatePacket", 1);
ml.putFunction("DoGet", 2);
// We could also use plain 'put' here, as these objects would be put
// by reference anyway.
ml.putReference(req);
ml.putReference(res);
ml.endPacket();
ml.discardAnswer();
} catch (MathLinkException e) {}
}
This would be accompanied by a Wolfram Language function DoGet that takes the two Java object arguments and implements the servlet behavior. The syntax is explained in "Calling Java from the Wolfram Language".
doGet[req_, resp_] :=
JavaBlock[
Module[{outStream},
outStream = resp@getOutputStream[];
outStream@print["<HTML> <BODY>"];
outStream@print["Hello World"];
outStream@print["</BODY> </HTML>"];
]
]
]
Some Special User Interface Classes
Introduction
J/Link has several classes that provide some very high-level user interface components for your Java programs. They are discussed individually in the next subsections. These classes are in the new com.wolfram.jlink.ui package, so do not forget to import that package if you want to use the classes.
ConsoleWindow
The ConsoleWindow class gives you a top-level frame window that displays output printed to the System.out and/or System.err streams. It has no input facilities. This is the class used to implement the Wolfram Language function ShowJavaConsole, discussed in "The Java Console Window". This class is quite useful for debugging Java programs that do not have a convenient place for console output. An example is a servlet—rather than digging around in your servlet container's log files after every run, you can just display a ConsoleWindow and see debugging output as it happens.
This class is a singleton, meaning that there is only ever one instance in existence. It has no public constructors. You call the static getInstance() method to acquire the sole ConsoleWindow object. Here is a code fragment that demonstrates how to use ConsoleWindow. You can find more information on this class in its JavaDoc page.
// Don't forget to import it (a different package than the rest of J/Link):
// import com.wolfram.jlink.ui.ConsoleWindow;
ConsoleWindow cw = ConsoleWindow.getInstance();
cw.setLocation(100, 100);
cw.setSize(450, 400);
cw.show();
// Specify that we want to capture System.out and System.err.
cw.setCapture(ConsoleWindow.STDOUT | ConsoleWindow.STDERR);
System.out.println("hello world from stdout");
System.err.println("hello world from stderr");
MathSessionPane
The MathSessionPane class provides an In/Out Wolfram System session window complete with a full set of editing functions including cut/copy/paste/undo/redo, support for graphics, syntax coloring, and customizable font styles. It is a bit like the Wolfram Language kernel's "terminal" interface, but much more sophisticated. You can easily drop it into any Java program that needs a full command-line interface to the Wolfram Language. The class is a Java Bean and will work nicely in a GUI builder environment. It has a very large number of properties that allow its look and behavior to be customized.
The best way to familiarize yourself with the features of MathSessionPane is to run the SimpleFrontEnd example program, found in the JLink/Examples/Part2/SimpleFrontEnd directory. SimpleFrontEnd is little more than a frame and menu bar that host a MathSessionPane. Essentially all the features you see are built into MathSessionPane, including the keyboard commands and the properties settable via the Options menu. To run this example, go to the SimpleFrontEnd directory and execute the following command line.
(Windows)
java -classpath SimpleFrontEnd.jar;..\..\..\JLink.jar SimpleFrontEnd
(Linux, Mac OS X):
java -classpath SimpleFrontEnd.jar:../../../JLink.jar SimpleFrontEnd
The application window will appear and you will be prompted to enter a path to a kernel to launch. Once the Wolfram Language is running, try various computations, including plots. Experiment with the numerous settings and commands on the menus. One feature of MathSessionPane not exposed via the SimpleFrontEnd menu bar is a highly customizable syntax coloring capability. The default behavior is to color built-in Wolfram Language symbols, but you can get as fancy as you like, such as specifying that symbols from a certain list should always appear in red, and symbols from a certain package should always appear in blue.
The methods and properties of MathSessionPane are described in greater detail in the JavaDocs, which are found in the JLink/Documentation/JavaDoc directory.
BracketMatcher and SyntaxTokenizer
The auxiliary classes BracketMatcher and SyntaxTokenizer are used by MathSessionPane, but can also be used separately to provide these services in your own programs. An example of the sort of program that would find these classes useful is a text-editor component that needs to have special features for Wolfram Language programmers.
These classes are described in greater detail in their JavaDoc pages. The JavaDocs for J/Link are found in the JLink/Documentation/JavaDoc directory. You can also look to see how they are used in the source code for the MathSessionPane class (MathSessionPane.java).
The BracketMatcher class locates matching bracket pairs (any of (), {}, [], and (**)) in Wolfram Language code. It ignores brackets within strings and within Wolfram Language comments, and it can accommodate nested comments. It searches in the typical way—expanding the current selection left and right to find the first enclosing matching brackets. To see its behavior in action, simply run the SimpleFrontEnd sample program discussed in the previous section on MathSessionPane and experiment with its bracket-matching feature.
SyntaxTokenizer is a utility class that can break up Wolfram Language code into four syntax classes: strings, comments, symbols, and normal (meaning everything else). You can use it to implement syntax coloring or a code analysis tool that can extract all comments or symbols from a file of Wolfram Language code.
InterruptDialog
The InterruptDialog class gives you an Interrupt Evaluation dialog box with choices for aborting, quitting the kernel, and so on, depending on what the kernel is doing at the time.
The InterruptDialog constructor takes a Dialog or Frame instance that will be the parent window of the dialog box. What you supply for this argument will typically be the main top-level window in your application. InterruptDialog implements the PacketListener interface, and you use it like any other PacketListener.
// Don't forget to import it (a different package than the rest of J/Link):
// import com.wolfram.jlink.ui.InterruptDialog;
ml.addPacketListener(new InterruptDialog(myParentFrame));
After the line of code is executed, whenever you interrupt a computation (by sending an MLINTERRUPTMESSAGE or, more commonly, by calling the KernelLink interruptEvaluation() method), a modal dialog box will appear with choices for how to proceed.
The SimpleFrontEnd sample program discussed in the section "MathSessionPane" makes use of an InterruptDialog. To see it in action, launch that sample program and execute the following Wolfram Language statement.
Then select Interrupt Evaluation from the Evaluation menu. The Interrupt Evaluation dialog box will appear and you can click the Abort Command Being Evaluated button to stop the computation. To use an InterruptDialog in your own program, your user interface must provide a means for users to send an interrupt request, such as an Interrupt button or special key combination. In response to this action, your program would call the KernelLink interruptEvaluation() method.
That a behavior as complex as a complete Interrupt Evaluation dialog box can be plugged into a Java program with only a single line of code is a testament to the versatility of the PacketListener interface, described in "Using the PacketListener Interface". The InterruptDialog class works by monitoring the incoming flow of packets from the kernel and detecting the special type of MenuPacket that the kernel sends after an interrupt request. Anytime you have some application logic that needs to know about packets that arrive from the Wolfram Language, you should implement it as a PacketListener.
Writing Applets
This User Guide has presented a lot of information about how to use J/Link to enable WSTP functionality in Java programs, whether those Java programs are applications, JavaBeans, servlets, applets, or anything else. If you want to write an applet that makes use of a local Wolfram Language kernel, you have some special considerations because you will need to escape the Java security "sandbox" within which the browser runs applets.
The only thing that J/Link needs special browser security permission for is to load the J/Link native library. The only reason the native library is required, or even exists at all, is to perform the translation between Java calls in the NativeLink class and Wolfram Research's platform-dependent WSTP library. NativeLink is the class that implements the MathLink interface in terms of native methods. Currently, every time you call MathLinkFactory.createMathLink() or MathLinkFactory.createKernelLink(), an instance of the NativeLink class is created, so the J/Link native library must be loaded. In other words, the only thing in J/Link that needs the native library is the NativeLink class, but currently all MathLink or KernelLink objects use a NativeLink object. You cannot do anything with J/Link without requiring the native library to be loaded.
Different browsers have different requirements for allowing applets to load native libraries. In many cases, the applet must be "signed", and the browser must have certain settings enabled. Note that letting Java applets launch local kernels is an extreme breach of security, since the Wolfram Language can read sensitive files, delete files, and so on. It is probably not a very good idea in general for users to allow applets to blast such an enormous hole in their browser's security sandbox. A better choice is to have Java applets use a kernel residing on the server. In this scenario, the browser's Java runtime does not need to load any local native libraries, so there are no security issues to overcome. This requires significant support on both the client and server side. This support is not part of J/Link itself, but it is a good example of the sort of programs J/Link can be used to create. | https://reference.wolfram.com/language/JLink/tutorial/WritingJavaProgramsThatUseTheWolframLanguage.html | CC-MAIN-2021-49 | refinedweb | 21,962 | 53.31 |
Eclipse is one of the most popular IDEs for Java & Spring application development. Spring has developed the Spring IDE plugin providing developers with Spring aware tooling for our projects.
SpringSource Tool Suite = {
Eclipse + SpringIDE + M2Eclipse +
GroovyEclipse + AJDT + EMF + WTP + DTP,
more...
}
Why use STS/SpringIDE
- Bean Configuration Editor – Content Sensitive bean editing. Defining beans in XML will give you the names of the properties when entering property values in XML. Multitab editor with tabs for Namespaces and namespace specific content (JDBC, Integration, etc).
- Beans Graph / Dependency Graph – visualizer for Spring config files & config sets
- Beans Cross Reference View – show bean references across multiple config files
- Beans Quick Cross Reference – cross ref info on beans in open config file
- Beans Quick Outline – outline of beans & properties in open config file
- Spring Beans Searching – by name, id, class, pointcut, etc.
- Spring Bean Validation – see: Preferences > Spring > Project Validators
- Spring MVC Request Mappings View – discovers all MVC annotations
- Java Editor Enhancements – Predefined shortcuts for Spring features, such as “POST” - in a controller will provide a controller method signature. Quick fixes for missing annotations, etc.
- Visualization – Graphical editors for Beans & Bean relationships, Spring Web Flow, Spring Batch, Spring Integration & Spring Aspects/AOP. Not just for show but editable too!
- Spring Explorer – Ever wondered… where are my bean config files? Switch to Spring Explorer view to see the beans for the project.
First Things
- Download and install STS from springsource.com/developer/sts
[or]
- Existing Eclipse installation? Download the Spring IDE bookmarks from Then, import the bookmarks file from Preferences > Install/Update > Available Update Sites.
What else is in the Box?
- Groovy/Grails Support
- Spring OSGi (Dynamic Modules/Eclipse Blueprint) Support
- Spring dm Server / Eclipse Virgo Support
- Spring Roo Support
- Gradle Support
- tcServer & Insight
- JDBC Support
- UML Diagramming
Q: Spring features not showing up in my project, what do I do?
A: Right click on the project, under “Spring Tools…” click “Add Spring Project Nature”
Q: Why does Spring Explorer not show any bean config (XML) files?
A: Right click on the project, select “Properties” – In the properties dialog, Spring > Beans Support – click the “Scan…” button.
Q: I imported a project but the red exclamation is over the icon?
A: If it is a Maven project, Right click on the project > Maven > Enable Dependency Management
Q: Why doesn’t Spring find my configuration files when I run my JUnit tests?
A: When using Maven, you need to run “process-resources” or “resources:resources”. Open Preferences > Maven and in the field “Goals to Run when updating project configuration” should have one of these values. Then right click on the project, choose Run As > Maven package [or] Maven Install
Q: Which version of Maven is STS running?
A: Check in Preferences > Maven > Installations – Eclipse Helios & Indigo will by default install an OLD version of Maven 3.0. It is recommended that you install a current release (3.0.3 at the time of this writing) and Add that to the installation list.
Q: How do I open the Maven pom file in XML mode instead of the GUI mode?
A: Preferences > Maven > POM Editor > check “Open XML page in the POM editor by default”
Q: How do I set AOP visualization to recognize my Spring Aspects?
A: Preferences > Visualiser > Check “Spring AOP Provider” in the “Available Providers” box.
Q: I do not write Aspects, why should I care about Aspect Visualization?
A: Even if you do not write your own aspects, Spring implements them via some annotations such as @Transactional
Q: What does the Spring Tools > Enable Spring Aspects tooling do?
A: Enables advanced JDT features – see: wiki.eclipse.org/JDT_weaving_features
Q: What do the letters “S”, “M”, “Aj”, etc mean over my project, directory and files in Eclipse?
A: “S” – Spring, “M” – Maven. “AJ” – AspectJ, “J” – Java
Q: What is the AOP Event Trace View?
A: This view shows what Spring is doing when build it’s internal AOP bean model.
{{ parent.title || parent.header.title}}
{{ parent.tldr }}
{{ parent.linkDescription }}{{ parent.urlSource.name }} | https://dzone.com/articles/springsource-tool-suite-faq | CC-MAIN-2016-30 | refinedweb | 667 | 62.78 |
Intel AES-NI Optimization on Solaris
By Danx-Oracle on Nov 17, 2010
Intel AES-NI Optimization on Solaris
Introduction.
Since 2001, AES has been widely-adopted and is now a part of several data communication standards, such as WPA2 for wi-fi, IPsec for secure Internet transmission, SSH 2 for file and terminal access, and SSH v2 for secure web connections.
To improve performance Intel added 6 new instructions to the Intel64 instruction set, called AES-NI (for AES New Instructions). The AES-NI instructions are first available on the "Westmere" architecture microprocessors (some low-end Westmere chips for mobile/laptop use don't have AES-NI). Westmere processors are part of the Intel "Core" processor family and include the Xeon 5600 processors introduced in 2010. Oracle's Sun Fire X4170 M2 and X4270 M2 are two systems that use Xeon 5600 processors.
Previous Work
Previously, for OpenSolaris 2008.11/Solaris Nevada (build 93), I optimized AES by replacing optimized C code with optimized assembly. The optimized C code used previously was the optimized reference implementation written in C furnished by the authors of AES and first made available in Solaris 10. The optimized assembly code I used was based on Dr. Brian Gladman's AES implementation, which was also faster than the OpenSSL assembly. For details see my previous blog post, Optimizing OpenSolaris With Open Source: AES (2008).
Intel AES-NI Instruction Set
Intel AES-NI consists 6 instructions: AESENC, AESENCLAST, AESDEC, AESDECLAST, AESKEYGENASSIST, and AESIMC. AESENC performs one round of encryption (which consists of these steps: Substitute bytes, shift rows, mix columns, and add (xor) round key). AESENCLAST performs the final encryption round, which doesn't mix columns. Similarly AESDEC and AESDECLAST perform the one round of decryption.
Two more instructions perform key expansion of the user key, formatting it for internal use by the algorithm. The AESKEYGENASSIST instruction helps generate the round keys, used for encryption. The AESIMC then converts the encryption key, with an operation called Inverse Mix Columns, to a form suitable for decryption.
Cache Attack prevention with AES-NI
The most highly-optimized AES algorithms, including Dr. Gladman's, has a weakness under timing attacks, due to their use of large lookup tables. By pre-loading the microprocessor cache the AES table entries, and measuring the encryption time, once can can find what table entries were accessed. This information could be used to help reveal the secret key (although still difficult). Current software mitigation techniques against cache attacks carry significant performance penalties. However, AES-NI prevents such attacks because AES-NI instruction latency is fixed and data-independent.
Implementation
To implement AES-NI required a number of dependencies, briefly:
- getisax(2) and the Kernel's x86_feature/x86_featureset bit array needed to be expanded to detect and record the presence of Intel AES-NI instructions (CR 6750666). These bits are set by Solaris from the CPUID instruction.
- The Solaris amd64 assemblers, as(1) and fbe(1) needed to support the new AES-NI instructions (CR 6740663). The disassembler, dis(1) also was extended to display AES-NI (CR 6762031). Also, GNU binutils was updated to 2.19 to get the latest version of the GNU assembler, gas(1), with AES-NI support (CR 6644870).
Intel provided an implementation for OpenSSL to optimize AES using assembly that includes the AES-NI instructions and the 128-bit %xmm registers, %xmm0-%xmm16. The implementation is basically the same as in OpenSSL with minor differences in source. Changes include reordering the function parameters and structure types from OpenSSL to those defined in Solaris. In userland,
Everyone likes pretty color charts, so here they are..
Userland library performance The first chart compares AES128, AES192, and AES256 before and after the AES-NI optimization using the libpkcs11.so/libsoftcrypto.so libraries. The time shown is user time, in seconds on a quiet system running an internal micro-benchmark, aesspeed (lower is better). Runtime improved by 79%, 74%, and 79%, respectively.
Solaris kernel performance
This chart shows Solaris kernel performance using kernel module "aes".
This micro-benchmark, runs AES128 and AES256 in 4 threads
for 5000 iterations on 1024 bytes of data.
Numbers are is in 1000000 bytes/second (higher is better).
Performance improved here by 26% and 56%, respectively.
Solaris kernel performance
Finally, another Solaris kernel micro-benchmark.
This one is similar to the previous one, except it's running AES128 with 64 bytes of data on 1, 2, 3, and 4 threads.
Performance improved by 50% for the 1, 3, and 4 thread case.
The 2 thread case looks like an outlier.
Availability in Solaris
This feature is available only for Solaris x86 64-bit, running on a Intel microprocessor that supports the AES-NI instruction set.
- I integrated AES-NI optimization in Solaris build snv_114 (see Change Request CR 6767618), so it's available in Oracle Solaris 11 Express 2010.11.
- I also back-ported AES-NI optimization to Solaris 10 10/09 (aka update 8).
- AES-NI is available by default in Java through Java Cryptography Extension (JCE)'s PKCS#11 extension. PKCS#11 is an industry standard interface supported by Solaris and used by default by JCE. For more information see Ramesh Nagappan's blog Java Cryptography on Intel Westmere: Solaris Advantage.
More Information
- Intel has a detailed white paper on AES-NI written by Shay Gueron, Intel Advanced Encryption Standard (AES) Instructions Set (2008, 2010).
- Jeffrey Rott of Intel has a brief overview of AES-NI, "Intel Advanced Encryption Standard Instructions (AES-NI) (2010).
- The assembly source is in file $SRC/common/crypto/aes/amd64/aes_intel.s. This is common code for both the userland library, libsoftcrypto.so, and kernel module aes.
Disclaimer: the statements in this blog are my personal views, not that of my employer.
Hello,
this cannot be probably Lynnfield CPU. As none from Lynnfield line provides AES-NI. See
Cheers,
Karel
Posted by kcg on November 24, 2010 at 03:55 AM PST #
Karel,
You're right. I rechecked the lab notes. It's a Intel Piketon platform system with Clarkdale processor (with AES-NI). Another Piketon has a Lynnfield processor (no AES-NI), but not this one.
- Dan
Posted by Dan Anderson on November 24, 2010 at 05:23 AM PST #
Hello,
Your web server seems to be misconfigured. The content-type in the HTTP header differs from the one in the XHML document.
curl -I 2> /dev/null | grep -i content-type
Content-Type: text/plain
curl 2> /dev/null | grep -i content-type
<meta http-
Some browsers do not render the HTML then.
Best regards,
Michael
Posted by Michael on January 13, 2011 at 08:38 PM PST #
Michael,
The results are correct for me, text/html (see below). Strange. Perhaps you're using a web proxy server or nat translation service that's modifying the header?
- Dan
$ curl -I
HTTP/1.1 200 OK
Server: Sun-Java-System-Web-Server/7.0
. . .
Content-type: text/html;charset=utf-8
. . .
$curl
. . .
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "DTD/xhtml1-strict.dtd">
. . .
<meta http-
. . .
Posted by Daniel Anderson on January 13, 2011 at 11:42 PM PST #
Hi Dan!
I've got myself a box with a Xeon L3406 that, according to the Intel support, should support both AES and PCLMULDQD. However, the bandwidth I'm seeing for ZFS encryption is way lower than I expected.
I've checked with prtconf for the cpuid-features-ecx data. On the L3406 it reports 0098e3fd, on an i5-661 it reports 0298e3bf. If I understand the CPUID command and bits correctly, that means the L3406 has exactly AES and PCLMULDQD less than the i5.
Is there any way to force the kernel to use the optimized codepaths even if the CPU claims not to support the right instructions? Or any way to see whether they are being used?
The weird thing is that when I swapped the L3406 for an i5-661 I only saw a performance increase proportional to the increase in clock frequency. That sounds to me the optimized code is either off on both CPUs or on on both...
I'd post some actual numbers, but I'm not sure the OTN license allows me to do so. I'm CPU bound and have IO to spare. Could you tell me what ballpark throughput figures I should see with ZFS using aes-128-ccm AND your optimizations on a 2 GHz Clarkdale?
Posted by Elmar on February 08, 2011 at 08:07 AM PST #
According to AES-NI is NOT supported. Here's a free Solaris program to detect if AES-NI is supported:
#include <stdio.h>
#include <sys/auxv.h>
void main(void) {
uint_t ui = 0; (void) getisax(&ui, 1);
printf("AES-NI instructions are %spresent.\\n", (ui & AV_386_AES) ? "" : "not ");
}
Similarly, use AV_386_PCLMULQDQ to test for carryless multiply support.
If you want to look at bits, see "Intel Processor Identification and the CPUID Instruction" Application Note 485 on (Google it--the URL changes with each update). Remember bits are byte swapped on Intel (least-significant bits come first, Little Endian).
Posted by Dan Anderson on February 08, 2011 at 08:47 AM PST # | https://blogs.oracle.com/DanX/entry/intel_aes_ni_optimization_on | CC-MAIN-2015-18 | refinedweb | 1,530 | 56.55 |
redirect with tiles - Struts
frameset.
define all the three pages inside it. Hi friend,
Please specify in detail and send me code.
Thanks. I using tiles... in page left when i click it is page Body change. Help me!
Thanks very much
Hi.. - Struts
/struts/
Thanks. struts-tiles.tld: This tag library provides tiles...Hi.. Hi,
I am new in struts please help me what data write........its very urgent Hi Soniya,
I am sending you a link. This link
Hi - Struts
://
Thanks. Hi Soniya,
We can use oracle too in struts...Hi Hi friends,
must for struts in mysql or not necessary... know it is possible to run struts using oracle10g....please reply me fast its Plugin
Tiles Plugin I have used tiles plugin in my projects but now I am... code written in tiles definition to execute two times and my project may has become slow.
Hi Friend,
Please visit the following link:
http
tiles - Struts
Tiles in Struts Example of Titles in Struts Hi,We will provide you the running example by tomorrow.Thanks Hi,We will provide you the running example by tomorrow.Thanks
hi - XML
Thanks
Rajanikant Hi friend,
It is a format to share data...hi Can you plz tell mee about Rss feeds.What is the purpose of xml in RSS?
how we can develope an RSS using XML RSS generaly used
also its very urgent Hi Soniya,
I am sending you a link. I hope...hi... Hi Friends,
I am installed tomcat5.5 and open...
Thanks.
Amardeep want to installed tomcat5.0 version please... please help me. its very urgent Hi friend,
Some points to be remember... it in production
* Complete server monitoring using JMX and the manager web
Hi
Hi Hi All,
I am new to roseindia. I want to learn struts. I do not know anything in struts. What exactly is struts and where do we use it. Please help me. Thanks in advance.
Regards,
Deepak
Tiles - Struts
Inserting Tiles in JSP Can we insert more than one tiles in a JSP page
hi
online multiple choice examination hi i am developing online multiple choice examination for that i want to store questions,four options,correct answer in a xml file using jsp or java?can any one help me?
Please
Session management using tiles
Session management using tiles hi i am working on elearning project ..my problem is i am not able to maintain session over login's page..suppose if i logged in one user..and if i open another tab and logged in another account
hi
storing data in xml file using jsp hi i am storing data in xml file using jsp.in this when i enter data into xml file i am getting xml declaration...);
out.println("<b>Xml File Created Successfully</b>
hi!
hi! how can i write aprogram in java by using scanner when asking... to to enter, like(int,double,float,String,....)
thanx for answering....
Hi...);
System.out.println(s);
}
}
Thanks
Developing Simple Struts Tiles Application
Developing Simple Struts Tiles Application
... will show you how to develop simple Struts Tiles
Application. You will learn how to setup the Struts Tiles and create example
page with it.
What is Struts
regarding sending mesage - JavaMail
me in this regard
Hi friend,
I am sending you a link...regarding sending mesage i have tried the following program... MimeMessage(session);
// Set the RFC 822 "From" header field using
Thanks - Java Beginners
Thanks Hi,
Thanks ur sending url is correct..And fullfill... and send me...
Thanks once again...for sending scjp link Hi friend... to visit :
Thanks
sending email code - JSP-Servlet
sending email code How To Send Emails using jsp Hi friend,
I am sending you a link. This link will help you.
Please visit for more information.
struts hi,
what is meant by struts-config.xml and wht are the tags... and search you get the jar file Hi friend,
struts.config.xml : Struts has.../struts/
Thanks
Java + XML - XML
. Hi friend,
I am sending you a link. This link will help you.
Please visit :
Thanks...Java + XML 1) I have some XML files,
read one xml
Sending email with read and delivery requests
Sending email with read and delivery requests Hi there,
I am sending emails using JavaMail in Servlets on behalf of a customer from the website... receipts. Anyone got any ideas, or experience. All advice appreciated!
Many thanks
struts tiles framework
struts tiles framework how could i include tld files in my web application
Am doing a project, in that i need to send email to multiple recipients at a same time using jsp so send me the code as soon as possible.
Regards,
Santhosh
Sending query with variable - JSP-Servlet
.
RESOLVE MY PROBLEM.
Thanks. Hi Friend,
Try the following:
1...Sending query with variable While displaying pages in frames... database and query should have a variable at the end. While using this variable we
struts
struts Hi
how struts flows from jsp page to databae and also using validation ?
Thanks
Kalins Naik
Regarding tiles - Struts
Regarding tiles I am taken image from Database.So, i am already... the session, its also shown. And I am also created one tiles for calling that image in the JSP, and insert the tiles in the respective papes, In which I want
xml - XML
xml hi
convert xml document to xml string.i am using below code...-an-xml-document-using-jd.shtml
Hope that it will be helpful for you... = stw.toString();
after i am getting xml string result, like
Successxxx profile
use of Struts - Struts
use of Struts Hi,
can anybody tell me what is the importance of sturts? why we are using it?
Hitendra Hi,
I am sending... example.
Thanks
sending mail - JSP-Servlet
sending mail Hi,
what is the code for sending mail automatically without user intervention? thanks in advance
Email queue while sending mail using Struts Class Can I maintain a queue of mails being sent from a JSP page in a DB to get its status
sending emails - JavaMail
sending emails what is the code for sending emails in java Hi Friend,
Please visit the following link:
Hope that it will be helpful for you.
Thanks
validation using validator-rules.xml - Struts
validation using validator-rules.xml Hi I am trying to validate my form using Validator-rules.xml.
I am using Eclipse 3.0 Struts 1.1 and Tomcat... the form.
Please let me know the correct code
Thanks in advance
Hi
Struts - Struts
of ? Hi friend,
I am sending you a link. This link will help you. Please visit for more information.
Thanks
hi.... - Java Beginners
hi.... Hi friends
i am using ur sending code but problem... very urgent
Hi ragini,
First time put the hard code after...-with-jsp.shtml
Thanks
sending email using smtp in java
sending email using smtp in java Hi all,
I am trying to send and email to through my company mail server. Following is my code
package com.tbss;
import javax.mail.*;
import javax.mail.internet.*;
import
Hi.... - Java Beginners
Hi.... Hi Friends,
Thanks for reply can send me sample of code using ur idea...
First part i have completed but i want to how to go....
For example : Java/JSP/JSF/Struts 1/Struts 2 etc....
Thanks
Struts Project Planning - Struts
Struts Project Planning Hi all,
I am creating a struts application... through and which manner.
Thanks in Advance
Casionvaguy Hi friend... classes and which they i should create??
My application wil be using database so,
I m getting Error when runing struts application.
i have already define path in web.xml
i m sending --
ActionServlet... for more information.
Thanks
struts - Struts
struts how the database connection is establised in struts while using ant and eclipse? Hi friend,
Read for more information.
Thanks
Sending File - JSP-Servlet
Sending File How to use tag in Jsp, & How read a file fom Client Hi Friend,
We used tag to upload a file.
Please visit....
Thanks
Struts - Struts
. Acctually i m using netbeans ide when i select a new web application for struts its....
thanks and regards
Sanjeev Hi friend,
For more information.../struts/
Thanks
Hi Hi
How to implement I18N concept in struts 1.3?
Please reply to me
sending a mail - JSP-Servlet
changes..
thanks in advance..
Hi Friend,
Please visit...sending a mail I m writing a code for send mail in jsp,i sending... that it will be helpful for you.
Thanks
Read XML using Java
Read XML using Java Hi All,
Good Morning,
I have been working... of all i need to read xml using java . i did good research in google and came to know...();
}
}
}
Parse XML using JDOM
import java.io.*;
import org.jdom.
XML
XML Hi......
Please tel me about that
Aren't XML, SGML, and HTML all the same thing?
Thanks
Reg XML - XML
Reg XML How can I become an XML programmer?What are the channels I possess on INTERNET
Thanks & Regards
Ravi Pullela Hi,
XML...://
Thanks.
Amardeep
struts - Struts
struts Hi,
I need the example programs for shopping cart using struts with my sql.
Please send the examples code as soon as possible.
please send it immediately.
Regards,
Valarmathi
Hi Friend,
Please
sending automatic email - JavaMail
sending automatic email Dear sir.
In my project i need to send... days.i am using jsp,mysql and tomcat for my project.Expire information are stored....
thanks
Exact I dont how you approach your application
Hi
Hi Hi this is really good example to beginners who is learning struts2.0
thanks
Struts - Struts
explaination and example? thanks in advance. Hi Friend,
It is not thread...://
Thanks...Struts Is Action class is thread safe in struts? if yes, how
iphone mail sending problem
iphone mail sending problem Hi, I'm receiving the following error ... while sending mail in my iphone application
Terminating app due to uncaught... getting this error and to solve it?
Thanks in Advance!
Hi all,
I get
Hi
Hi I want import txt fayl java.please say me...
Hi,
Please clarify your problem!
Thanks
HI.
HI. hi,plz send me the code for me using search button bind the data from data base in dropdownlist
jquery alert dialog box using struts2 how to develop jquery alert dialog box using struts2
Sending mail - JavaMail
Sending mail Need a simple example of sending mail in Java Hi,To send email you need a local mail server such as apache james. You first... emails using outlook client.From java program you can also send email.Here
hi
hi add any two numbers using bitwise operatorsprint("code sample
hi
hi servlet program to retrieve data from database using session object
Using radio button in struts - Struts
Using radio button in struts Hello to all ,
I have a big problem... single selection). Hi friend,
Please give full details and full source code to solve the problem :
For more information on radio in Struts! public NewJFrame() {
initComponents();
try
{
Class.forName("java.sql.Driver");
con=DriverManager.getConnection("jdbc... for ma project...note that its made by using tools lik textfields , button etc | http://roseindia.net/tutorialhelp/comment/176 | CC-MAIN-2014-15 | refinedweb | 1,853 | 77.33 |
import "path/filepath"(path string) string
Ext returns the file name extension used by path. The extension is the suffix beginning at the final dot in the final element of path; it is empty if there is no dot.
Code:
fmt.Printf("No dots: %q\n", filepath.Ext("index")) fmt.Printf("One dot: %q\n", filepath.Ext("index.js")) fmt.Printf("Two dots: %q\n", filepath.Ext("main.test.js"))
Output:
No dots: "" One dot: ".js" Two dots: ".js"..
Code:
paths := []string{ "/a/b/c", "/b/c", "./b/c", } base := "/a" fmt.Println("On Unix:") for _, p := range paths { rel, err := filepath.Rel(base, p) fmt.Printf("%q: %q %v\n", p, rel, err) }
Output:.
Code:) }
Output:.
Code:
fmt.Println("On Unix:", filepath.SplitList("/a/b/c:/usr/bin"))
Output:.
Code:) }. | https://static-hotlinks.digitalstatic.net/path/filepath/ | CC-MAIN-2018-51 | refinedweb | 132 | 72.63 |
Hello,
I have an issue with my need to upload an image to my web server.
I'm using Plugin.Media from James Montemagno. I use this code to let the user select an image.
var media = CrossMedia.Current;
var file = await media.PickPhotoAsync();
Then, I have to convert that "file" to base64 to upload it to my web server, but I don't know how to do that since in PCL there's no System.IO.File.
How can I do that? Do you have any hint?
Thanks!
Answers
I think the NuGet package 'Shim' provides a number of stubs for missing namespaces, including
System.IO.File
You could do something like this:
I get this error:
I will try with Shim
Sorry, it should be
stream.ReadAsync, not
WriteAsync.
Yes got it, thank you!
wow it help me so much thank you guys specialy @JoeManke
Hi,
I tried the solution above, but when I copy the string value to an online base64-to-image-converter, the image is cropped, only the top part of the image is rendered and not the whole image. Am I missing something? I tried it on iOS device. | https://forums.xamarin.com/discussion/comment/247192 | CC-MAIN-2019-43 | refinedweb | 195 | 85.39 |
#include <DnsLayer.h>
Represents the DNS over TCP layer. DNS over TCP is described here: . It is very similar to DNS over UDP, except for one field: TCP message length which is added in the beginning of the message before the other DNS data properties. The rest of the data is similar.
Note: DNS over TCP can spread over more than one packet, but this implementation doesn't support this use-case and assumes the whole message fits in a single packet.
A constructor that creates the layer from an existing packet raw data
A constructor that creates an empty DNS layer: all members of dnshdr are set to 0 and layer will contain no records
A copy constructor for this layer
Calculate the TCP message length field
Reimplemented from pcpp::DnsLayer.
Set the TCP message length value as described in | https://pcapplusplus.github.io/api-docs/classpcpp_1_1_dns_over_tcp_layer.html | CC-MAIN-2021-25 | refinedweb | 141 | 60.04 |
Okay, creative differences are getting sorted out. More fun on the way.
Okay, creative differences are getting sorted out. More fun on the way.
Well, Castor and I are having creative differences about what kind of documentation they need. Still, they're adding new docs, which is good. Any new information on their site regarding SQL binding is better than what's there now.
In the meantime, I'll finish up what I started and get it out in view somehow. Hmmm, if I could just get enough people to cert me here...
Still.
Dear me:
Got started on the Castor SQL binding docs. Pretty much followed the first two sections of the XML binding doc as a template. Now I'm stuck for a single good example, although I have a rather lame one that will work.
Spent considerable time on Saturday experimenting with hiding the Castor-genned classes (from an XML Schema source) behind a facade of objects inheriting from them. I'm not sure how viable the idea is, but two Castor features would be necessary to even try to make it work:
Oh well... now I'll have to generate classes when the schema changes and hand merge them with changes already made to existing generated classes. Pain in the butt, but long term that's probably how it would end up, anyway.
Maybe a good beans editor could replace the XML Schema code generator so that I don't have to write those annoying accessor/mutator methods.
Ciao
Dear me:
My first post here. Let's see if the habit sticks...
Current projects:
Write Castor JDO for SQL document. Some of this, at least, will wind up on the Castor website as they have a sore lack of documentation in this area. It cost me a week of pain to learn how to make the JDO mapping work; I hope to reduce that for others to about a day.
Need to get started on an XML Namespace filter that replaces namespace prefixes in an XML document with a Name- character encoded namespace identifier. The target document will still be a well-formed XML document, but the effect will be to replace QNames with encoded universal names (UNames). I have a feeling that some will see this as abuse of XML (encapsulating information w/o use of markup), but I'll leave it up to the practitioners to weigh the pros and cons for themselves.
Free advice: Listen to the experts, but don't let them decide for you.
Other immediate plans: Write XML Schema macros and plugins for Arachnophilia. The result will be a passable free XML Schema editor.
Try to find something useful to do with XSLT. So far my work just hasn't required it, which probably means I'm not looking hard enough.
Current ICC rating: 18! | http://www.advogato.org/person/jeffalo/diary.html?start=4 | CC-MAIN-2015-06 | refinedweb | 475 | 73.37 |
The QRegExp class provides pattern matching using regular expressions or wildcards. More...
#include <qregexp.h>
List of all member functions.
QRegExp knows these regexp primitives:
In wildcard mode, it only knows four primitives:
QRegExp supports Unicode both in the pattern strings and in the strings to be matched.
When writing regular expressions in C++ code, remember that C++ processes \ characters. So in order to match e.g. a "." character, you must write "\\." in C++ source, not "\.".
A character set matches a defined set of characters. For example, [BSD] matches any of 'B', 'D' and 'S'. Within a character set, the special characters '.', '*', '?', '^', '$', '+' and '[' lose their special meanings. The following special characters apply:
\note In Qt 3.0, the language of regular expressions will contain five more special characters, namely '(', ')', '{', '|' and '}'. To ease porting, it's a good idea to escape these characters with a backslash in all the regular expressions you'll write from now on.
Bugs and limitations:
Examples: qmag/qmag.cpp
Constructs an empty regular expression.
Constructs a regular expression.
Arguments:
See also setWildcard().
Constructs a regular expression which is a copy of r.
See also operator=(const and QRegExp&).
Destructs the regular expression and cleans up its internal data.
Returns TRUE if case sensitivity is enabled, otherwise FALSE. The default is TRUE.
See also setCaseSensitive().
Attempts to match in str, starting from position index. Returns the position of the match, or -1 if there was no match.
See also match().
Returns TRUE if the regexp is empty.
Returns TRUE if the regexp is valid, or FALSE if it is invalid.
The pattern "[a-z" is an example of an invalid pattern, since it lacks a closing bracket.
Attempts to match in str, starting from position index. Returns the position of the match, or -1 if there was no match.
If len is not a null pointer, the length of the match is stored in *len.
If indexIsStart is TRUE (the default), the position index in the string will match the start-of-input primitive (^) in the regexp, if present. Otherwise, position 0 in str will match.
Example:
QRegExp r("[0-9]*\\.[0-9]+"); // matches floating point int len; r.match("pi = 3.1416", 0, &len); // returns 5, len == 6
\note In Qt 3.0, this function will be replaced by find().
Examples: qmag/qmag.cpp
Returns TRUE if this regexp is not equal to r.
See also operator==().
This function is obsolete. It is provided to keep old source working, and will probably be removed in a future version of Qt. We strongly advise against using it in new code.
Consider using setPattern() instead of this method.
Sets the pattern string to pattern and returns a reference to this regexp. The case sensitivity or wildcard options do not change.
Copies the regexp r and returns a reference to this regexp. The case sensitivity and wildcard options are copied, as well.
Returns TRUE if this regexp is equal to r.
Two regexp objects are equal if they have equal pattern strings, case sensitivity options and wildcard options.
Returns the pattern string of the regexp.
Enables or disables case sensitive matching.
In case sensitive mode, "a.e" matches "axe" but not "Axe".
See also: caseSensitive().
Sets the pattern string to pattern and returns a reference to this regexp. The case sensitivity or wildcard options do not change.
Sets the wildcard option for the regular expression. The default is FALSE.
Setting wildcard to TRUE makes it convenient to match filenames instead of plain text.
For example, "qr*.cpp" matches the string "qregexp.cpp" in wildcard mode, but not "qicpp" (which would be matched in normal mode).
See also wildcard().
Returns TRUE if wildcard mode is on, otherwise FALSE.
See also setWildcard().
[protected]
For internal use only.
[protected]
For internal use only.
Search the documentation, FAQ, qt-interest archive and more (uses):
This file is part of the Qt toolkit, copyright © 1995-2005 Trolltech, all rights reserved. | http://doc.trolltech.com/2.3/qregexp.html | crawl-002 | refinedweb | 656 | 70.8 |
The software world is always atwitter with predictions on the next big piece of technology. And a lot of chatter focuses on what venture capitalists express interest in. As an investor, how do you pick a good company to invest in? Do you notice quirky names like “Kaggle” and “Meebo,” require deep technical abilities, or value a charismatic sales pitch?
This author personally believes we’re not thinking as big as we should be when it comes to innovation in software engineering and computer science, and that as a society we should value big pushes forward much more than we do. But making safe investments is almost always at odds with innovation. And so every venture capitalist faces the following question. When do you focus investment in those companies that have proven to succeed, and when do you explore new options for growth? A successful venture capitalist must strike a fine balance between this kind of exploration and exploitation. Explore too much and you won’t make enough profit to sustain yourself. Narrow your view too much and you will miss out on opportunities whose return surpasses any of your current prospects.
In life and in business there is no correct answer on what to do, partly because we just don’t have a good understanding of how the world works (or markets, or people, or the weather). In mathematics, however, we can meticulously craft settings that have solid answers. In this post we’ll describe one such scenario, the so-called multi-armed bandit problem, and a simple algorithm called UCB1 which performs close to optimally. Then, in a future post, we’ll analyze the algorithm on some real world data.
As usual, all of the code used in the making of this post are available for download on this blog’s Github page.
Multi-Armed Bandits
The multi-armed bandit scenario is simple to describe, and it boils the exploration-exploitation tradeoff down to its purest form.
Suppose you have a set of
actions labeled by the integers
. We call these actions in the abstract, but in our minds they’re slot machines. We can then play a game where, in each round, we choose an action (a slot machine to play), and we observe the resulting payout. Over many rounds, we might explore the machines by trying some at random. Assuming the machines are not identical, we naturally play machines that seem to pay off well more frequently to try to maximize our total winnings.
This is the most general description of the game we could possibly give, and every bandit learning problem has these two components: actions and rewards. But in order to get to a concrete problem that we can reason about, we need to specify more details. Bandit learning is a large tree of variations and this is the point at which the field ramifies. We presently care about two of the main branches.
How are the rewards produced? There are many ways that the rewards could work. One nice option is to have the rewards for action
be drawn from a fixed distribution
(a different reward distribution for each action), and have the draws be independent across rounds and across actions. This is called the stochastic setting and it’s what we’ll use in this post. Just to pique the reader’s interest, here’s the alternative: instead of having the rewards be chosen randomly, have them be adversarial. That is, imagine a casino owner knows your algorithm and your internal beliefs about which machines are best at any given time. He then fixes the payoffs of the slot machines in advance of each round to screw you up! This sounds dismal, because the casino owner could just make all the machines pay nothing every round. But actually we can design good algorithms for this case, but “good” will mean something different than absolute winnings. And so we must ask:
How do we measure success? In both the stochastic and the adversarial setting, we’re going to have a hard time coming up with any theorems about the performance of an algorithm if we care about how much absolute reward is produced. There’s nothing to stop the distributions from having terrible expected payouts, and nothing to stop the casino owner from intentionally giving us no payout. Indeed, the problem lies in our measurement of success. A better measurement, which we can apply to both the stochastic and adversarial settings, is the notion of regret. We’ll give the definition for the stochastic case, and investigate the adversarial case in a future post.
Definition: Given a player algorithm
and a set of actions
, the cumulative regret of
in rounds
is the difference between the expected reward of the best action (the action with the highest expected payout) and the expected reward of
for the first
rounds.
We’ll add some more notation shortly to rephrase this definition in symbols, but the idea is clear: we’re competing against the best action. Had we known it ahead of time, we would have just played it every single round. Our notion of success is not in how well we do absolutely, but in how well we do relative to what is feasible.
Notation
Let’s go ahead and draw up some notation. As before the actions are labeled by integers
. The reward of action
is a
-valued random variable
distributed according to an unknown distribution and possessing an unknown expected value
. The game progresses in rounds
so that in each round we have different random variables
for the reward of action
in round
(in particular,
and
are identically distributed). The
are independent as both
and
vary, although when
varies the distribution changes.
So if we were to play action 2 over and over for
rounds, then the total payoff would be the random variable
. But by independence across rounds and the linearity of expectation, the expected payoff is just
. So we can describe the best action as the action with the highest expected payoff. Define
We call the action which achieves the maximum
.
A policy is a randomized algorithm
which picks an action in each round based on the history of chosen actions and observed rewards so far. Define
to be the action played by
in round
and
to be the number of times we’ve played action
in rounds
. These are both random variables. Then the cumulative payoff for the algorithm
over the first
rounds, denoted
, is just
and its expected value is simply
.
Here the expectation is taken over all random choices made by the policy and over the distributions of rewards, and indeed both of these can affect how many times a machine is played.
Now the cumulative regret of a policy
after the first
steps, denoted
can be written as
And the goal of the policy designer for this bandit problem is to minimize the expected cumulative regret, which by linearity of expectation is
.
Before we continue, we should note that there are theorems concerning lower bounds for expected cumulative regret. Specifically, for this problem it is known that no algorithm can guarantee an expected cumulative regret better than
. It is also known that there are algorithms that guarantee no worse than
expected regret. The algorithm we’ll see in the next section, however, only guarantees
. We present it on this blog because of its simplicity and ubiquity in the field.
The UCB1 Algorithm
The policy we examine is called UCB1, and it can be summed up by the principle of optimism in the face of uncertainty. That is, despite our lack of knowledge in what actions are best we will construct an optimistic guess as to how good the expected payoff of each action is, and pick the action with the highest guess. If our guess is wrong, then our optimistic guess will quickly decrease and we’ll be compelled to switch to a different action. But if we pick well, we’ll be able to exploit that action and incur little regret. In this way we balance exploration and exploitation.
The formalism is a bit more detailed than this, because we’ll need to ensure that we don’t rule out good actions that fare poorly early on. Our “optimism” comes in the form of an upper confidence bound (hence the acronym UCB). Specifically, we want to know with high probability that the true expected payoff of an action
is less than our prescribed upper bound. One general (distribution independent) way to do that is to use the Chernoff-Hoeffding inequality.
As a reminder, suppose
are independent random variables whose values lie in
and whose expected values are
. Call
and
. Then the Chernoff-Hoeffding inequality gives an exponential upper bound on the probability that the value of
deviates from its mean. Specifically,
For us, the
will be the payoff variables for a single action
in the rounds for which we choose action
. Then the variable
is just the empirical average payoff for action
over all the times we’ve tried it. Moreover,
is our one-sided upper bound (and as a lower bound, sometimes). We can then solve this equation for
to find an upper bound big enough to be confident that we’re within
of the true mean.
Indeed, if we call
the number of times we played action
thus far, then
in the equation above, and using
we get that
, which converges to zero very quickly as the number of rounds played grows. We’ll see this pop up again in the algorithm’s analysis below. But before that note two things. First, assuming we don’t play an action
, its upper bound
grows in the number of rounds. This means that we never permanently rule out an action no matter how poorly it performs. If we get extremely unlucky with the optimal action, we will eventually be convinced to try it again. Second, the probability that our upper bound is wrong decreases in the number of rounds independently of how many times we’ve played the action. That is because our upper bound
is getting bigger for actions we haven’t played; any round in which we play an action
, it must be that
, although the empirical mean will likely change.
With these two facts in mind, we can formally state the algorithm and intuitively understand why it should work.
UCB1:
Play each of the
actions once, giving initial values for empirical mean payoffs
of each action
.
For each round
:
Let
represent the number of times action
was played so far.
Play the action
maximizing
.
Observe the reward
and update the empirical mean for the chosen action.
And that’s it. Note that we’re being super stateful here: the empirical means
change over time, and we’ll leave this update implicit throughout the rest of our discussion (sorry, functional programmers, but the notation is horrendous otherwise).
Before we implement and test this algorithm, let’s go ahead and prove that it achieves nearly optimal regret. The reader uninterested in mathematical details should skip the proof, but the discussion of the theorem itself is important. If one wants to use this algorithm in real life, one needs to understand the guarantees it provides in order to adequately quantify the risk involved in using it.
Theorem: Suppose that UCB1 is run on the bandit game with
actions, each of whose reward distribution
has values in [0,1]. Then its expected cumulative regret after
rounds is at most
.
Actually, we’ll prove a more specific theorem. Let
be the difference
, where
is the expected payoff of the best action, and let
be the minimal nonzero
. That is,
represents how suboptimal an action is and
is the suboptimality of the second best action. These constants are called problem-dependent constants. The theorem we’ll actually prove is:
Theorem: Suppose UCB1 is run as above. Then its expected cumulative regret
is at most
Okay, this looks like one nasty puppy, but it’s actually not that bad. The first term of the sum signifies that we expect to play any suboptimal machine about a logarithmic number of times, roughly scaled by how hard it is to distinguish from the optimal machine. That is, if
is small we will require more tries to know that action
is suboptimal, and hence we will incur more regret. The second term represents a small constant number (the
part) that caps the number of times we’ll play suboptimal machines in excess of the first term due to unlikely events occurring. So the first term is like our expected losses, and the second is our risk.
But note that this is a worst-case bound on the regret. We’re not saying we will achieve this much regret, or anywhere near it, but that UCB1 simply cannot do worse than this. Our hope is that in practice UCB1 performs much better.
Before we prove the theorem, let’s see how derive the
bound mentioned above. This will require familiarity with multivariable calculus, but such things must be endured like ripping off a band-aid. First consider the regret as a function
(excluding of course
), and let’s look at the worst case bound by maximizing it. In particular, we’re just finding the problem with the parameters which screw our bound as badly as possible, The gradient of the regret function is given by
and it’s zero if and only if for each
,
. However this is a minimum of the regret bound (the Hessian is diagonal and all its eigenvalues are positive). Plugging in the
(which are all the same) gives a total bound of
. If we look at the only possible endpoint (the
), then we get a local maximum of
. But this isn’t the
we promised, what gives? Well, this upper bound grows arbitrarily large as the
go to zero. But at the same time, if all the
are small, then we shouldn’t be incurring much regret because we’ll be picking actions that are close to optimal!
Indeed, if we assume for simplicity that all the
are the same, then another trivial regret bound is
(why?). The true regret is hence the minimum of this regret bound and the UCB1 regret bound: as the UCB1 bound degrades we will eventually switch to the simpler bound. That will be a non-differentiable switch (and hence a critical point) and it occurs at
. Hence the regret bound at the switch is
, as desired.
Proving the Worst-Case Regret Bound
Proof. The proof works by finding a bound on
, the expected number of times UCB chooses an action up to round
. Using the
notation, the regret is then just
, and bounding the
‘s will bound the regret.
Recall the notation for our upper bound
and let’s loosen it a bit to
so that we’re allowed to “pretend” a action has been played
times. Recall further that the random variable
has as its value the index of the machine chosen. We denote by
the indicator random variable for the event
. And remember that we use an asterisk to denote a quantity associated with the optimal action (e.g.,
is the empirical mean of the optimal action).
Indeed for any action
, the only way we know how to write down
is as
The 1 is from the initialization where we play each action once, and the sum is the trivial thing where just count the number of rounds in which we pick action
. Now we’re just going to pull some number
of plays out of that summation, keep it variable, and try to optimize over it. Since we might play the action fewer than
times overall, this requires an inequality.
These indicator functions should be read as sentences: we’re just saying that we’re picking action
in round
and we’ve already played
at least
times. Now we’re going to focus on the inside of the summation, and come up with an event that happens at least as frequently as this one to get an upper bound. Specifically, saying that we’ve picked action
in round
means that the upper bound for action
exceeds the upper bound for every other action. In particular, this means its upper bound exceeds the upper bound of the best action (and
might coincide with the best action, but that’s fine). In notation this event is
Denote the upper bound
for action
in round
by
. Since this event must occur every time we pick action
(though not necessarily vice versa), we have
We’ll do this process again but with a slightly more complicated event. If the upper bound of action
exceeds that of the optimal machine, it is also the case that the maximum upper bound for action
we’ve seen after the first
trials exceeds the minimum upper bound we’ve seen on the optimal machine (ever). But on round
we don’t know how many times we’ve played the optimal machine, nor do we even know how many times we’ve played machine
(except that it’s more than
). So we try all possibilities and look at minima and maxima. This is a pretty crude approximation, but it will allow us to write things in a nicer form.
Denote by
the random variable for the empirical mean after playing action
a total of
times, and
the corresponding quantity for the optimal machine. Realizing everything in notation, the above argument proves that
Indeed, at each
for which the max is greater than the min, there will be at least one pair
for which the values of the quantities inside the max/min will satisfy the inequality. And so, even worse, we can just count the number of pairs
for which it happens. That is, we can expand the event above into the double sum which is at least as large:
We can make one other odd inequality by increasing the sum to go from
to
. This will become clear later, but it means we can replace
with
and thus have
Now that we’ve slogged through this mess of inequalities, we can actually get to the heart of the argument. Suppose that this event actually happens, that
. Then what can we say? Well, consider the following three events:
(1)
(2)
(3)
In words, (1) is the event that the empirical mean of the optimal action is less than the lower confidence bound. By our Chernoff bound argument earlier, this happens with probability
. Likewise, (2) is the event that the empirical mean payoff of action
is larger than the upper confidence bound, which also occurs with probability
. We will see momentarily that (3) is impossible for a well-chosen
(which is why we left it variable), but in any case the claim is that one of these three events must occur. For if they are all false, we have
and
But putting these two inequalities together gives us precisely that (3) is true:
This proves the claim.
By the union bound, the probability that at least one of these events happens is
plus whatever the probability of (3) being true is. But as we said, we’ll pick
to make (3) always false. Indeed
depends on which action
is being played, and if
then
, and by the definition of
we have
.
Now we can finally piece everything together. The expected value of an event is just its probability of occurring, and so
The second line is the Chernoff bound we argued above, the third and fourth lines are relatively obvious algebraic manipulations, and the last equality uses the classic solution to Basel’s problem. Plugging this upper bound in to the regret formula we gave in the first paragraph of the proof establishes the bound and proves the theorem.
Implementation and an Experiment
The algorithm is about as simple to write in code as it is in pseudocode. The confidence bound is trivial to implement (though note we index from zero):
def upperBound(step, numPlays): return math.sqrt(2 * math.log(step + 1) / numPlays)
And the full algorithm is quite short as well. We define a function ub1, which accepts as input the number of actions and a function reward which accepts as input the index of the action and the time step, and draws from the appropriate reward distribution. Then implementing ub1 is simply a matter of keeping track of empirical averages and an argmax. We implement the function as a Python generator, so one can observe the steps of the algorithm and keep track of the confidence bounds and the cumulative regret.
def ucb1(numActions, reward): payoffSums = [0] * numActions numPlays = [1] * numActions ucbs = [0] * numActions # initialize empirical sums for t in range(numActions): payoffSums[t] = reward(t,t) yield t, payoffSums[t], ucbs t = numActions while True: ucbs = [payoffSums[i] / numPlays[i] + upperBound(t, numPlays[i]) for i in range(numActions)] action = max(range(numActions), key=lambda i: ucbs[i]) theReward = reward(action, t) numPlays[action] += 1 payoffSums[action] += theReward yield action, theReward, ucbs t = t + 1
The heart of the algorithm is the second part, where we compute the upper confidence bounds and pick the action maximizing its bound.
We tested this algorithm on synthetic data. There were ten actions and a million rounds, and the reward distributions for each action were uniform from
, biased by
for some
. The regret and theoretical regret bound are given in the graph below.
The regret of ucb1 run on a simple example. The blue curve is the cumulative regret of the algorithm after a given number of steps. The green curve is the theoretical upper bound on the regret.
Note that both curves are logarithmic, and that the actual regret is quite a lot smaller than the theoretical regret. The code used to produce the example and image are available on this blog’s Github page.
Next Time
One interesting assumption that UCB1 makes in order to do its magic is that the payoffs are stochastic and independent across rounds. Next time we’ll look at an algorithm that assumes the payoffs are instead adversarial, as we described earlier. Surprisingly, in the adversarial case we can do about as well as the stochastic case. Then, we’ll experiment with the two algorithms on a real-world application.
Until then!
Lucid explanation. Thanks
Hi Jeremy,
Regarding $X_{t,i}$ I should add since you assume they have the same mean it appears to me that you are assuming [implicitly] that they are from the same distribution which from my point of view justifies replaceing it with $X_i$. Am I wrong?
I think it’s notationally more appropriate to have separate random variables, each drawn from the same distribution, because there is a difference between
and
as random variables. For example, one is affected by statements like the Central Limit Theorem. hope the identical part was clear from the prose. see. I am just so skeptic 🙂 . At least now I am sure the claim was exactly about i.i.d.s
i think you have a code error in your python code. in ucb1 function, you never actually applied part 2 algorithms because it stops at the initialize empirical sums step.
The python function is a generator, so you have to iterate through the generator in order for it to run all the steps. For example, you might run:
for roundinfo in ucb1(…):
print(roundinfo)
This is a really well-written explanation of UCB1, Jeremy. Thanks!
Just had one question though:
How do you know when to stop the experiment? Is the computed upper bound value ( Math.sqrt(2*logT/n_j) ) analogous to the confidence interval? So if I had three arms after T plays with, say, upper bound values of: 0.09, 0.08, and 0.05, what does that actually mean?
It depends on your goal. In practice, often the goal is to serve ads to users, which never ends. If you’re trying to identify the single best arm, then you have to add additional assumptions to find it because, even though the process will converge in the limit to a distribution with overwhelming weight on the best arm, the convergence rate will depend on the difference of the payoff of the best arm and the second best arm, but you can use the last line of the main proof (along with a union bound over all the arms) to bound the probability that you pick a suboptimal arm. Then set that less than your threshold and solve for T.
Why “then another trivial regret bound is \Delta T” ?
In this case you never pick an optimal action (and all the penalties for suboptimal actions are the same). So you incur the most regret in every of the T rounds.
May I know, is UCB a good algorithm for adversarial settings, where an expert changes the reward in each iteration?
In practice, perhaps. In theory, no. See EXP3 and it’s variations for that. e.g.,
Technical point: I don’t think you can reduce to all the $\Delta_i$ being the same and then compare to the function $\Delta T$ without some further argument. You’ll have to maximize a combined function $\min(R(\Delta_1,\ldots),T \max \Delta_i)$, or something like this.
Hello brother, congrats by your explanation! Is great, but I had some problems on running: appears this log error, when I run your code
for roundinfo in ucb1(100, 1):
print(roundinfo)
Appears that on log:
Traceback (most recent call last):
File “/home/ubuntu/workspace/ex50/bin/mab.py”, line 35, in
for roundinfo in ucb1(100, 1):
File “/home/ubuntu/workspace/ex50/bin/mab.py”, line 13, in ucb1
payoffSums[t] = reward(t,t)
TypeError: ‘int’ object is not callable
Can you help me please? Thankful!
Yes. The
rewardinput to
ucb1is a function that accepts two inputs: an integer representing which action was chosen, and the current time step. It produces as output the reward for that action in that round (the rewards may change over time). You can see an example input where the reward of each action is a biased random number here:
For the part “Theorem: Suppose UCB1 is run as above. Then its expected cumulative regret”. For the second term with summation, should it be added from 1 to T instead of 1 to K?
LikeLiked by 1 person
Can someone please explain the step \displaystyle \mathbb{E}(G_A(T)) = \mu_1 \mathbb{E}(P_1(T)) + \dots + \mu_K \mathbb{E}(P_K(T))
Thanks For the clear explanation!
Why “any round in which we play an action j, it must be that a(j, T+1) = a(j,T)” ?
I think a will decrease in any round we play an action j, as the equation shows:
a = a(j,T) = \sqrt{2 \log(T) / n_j}
By the Chernoff-Hoeffding inequality in this article, did you leave out the factor of two and the absolute value calculation, or is P(X-u > t) = P(X – u < t)?
They are not necessarily equal, but both are upper bounded by the same quantity (Hoeffding’s bound has a one-sided and a two-sided version)
Is it possible to extend the UCB algorithm to a situation where the payoff of all variables are known every round? Because I’ve been trying to work it out myself, and the algorithm as described here just devolves into taking the variable with highest average payoff, and I’m certain that can’t be the best method.
…this is the contextual bandit problem.
So, but tangential, but: I’ve been trying to apply the UCB algorithm as described here to cryptocurrency trading,using the same mechanism you describe in your stocks post, along with a few other alternate metrics, and I’ve discovered that in that context the strategy that works the best by far is to simply have your metric be the average of the last three payoffs of a certain currency, it outstrips the UCB algorithm by a factor of 2.
Now that seemed to indicate that the UCB algorithm is not the best algorithm to use in that cryptocurrency situation. I then tried to implement the Exp3 algorithm: Same result. Simply averaging the last three payoffs led to double the payoff of the Exp3 algorithm.
Now thus I have to ask: What other alternative algorithms exist for the bandit trading situation where all information is known? Simply taking the average works, but I’m sure there must be alternatives.
Check out this survey for a relatively exhaustive list of the different models and algorithms: .
One thing to note is that the analysis of these algorithms doesn’t take into consideration constant-factor fluctuations in regret; they’re primarily concerned with asymptotic performance as the number of possible actions and the number of rounds grows. So there could easily be a trivial tweak to UCB1/Exp3 that ends up doubling or tripling its payoff, but researchers would not care. As one proposal, try the UCB/Exp3 update using the average of 3 for each update. On top of that, the people who design these algorithms primarily care about what can be proved, and what insights can be gained from those proofs. Average–of-three might do better, but probably does not have any good guarantees. I personally think guarantees are important when most financial trading is about measuring and hedging risk.
It’s also important to note that cryptocurrency fluctuations are far from independent trials, as UCB posits, and probably not adversarial either.
If you have more detailed notes about your experiments, I’d be interested to read them.
I’ll see if I can do a write-up. But just for interest’s sake: The whole idea of the averaging the last three payoffs came from assuming all winning currencies are because of somebody pumping them, i.e. all crypto is just one massive pump and dump(you probably already know what a pump n dump is, it’s where you artificially deflate a currency by buying a lot of it so that you can sell it at an inflated price),, and therefore the best currencies to buy in are those that are going up steadily fastest, since that indicates that somebody is pumping that currency. That assumption would lead one to saying: max payoff previous round = max payoff next round? But that strategy gives worse payoff then UCB or EXP, which indicates that crypto has still an element of randomness. Therefore you need to average it out as to filter out randomnesss, but not so much that you can;t detect when a currency is rocketing. Therefore average of 3 is a good compromise.
So, I’ve been working through the survey, and part of the stochastic bandit chapter baffles me, namely: in the restatement of the pseudoregret why do they take the mean of the mean? And how did they get that from the original equation?
Stack Overflow question concerning the question^
So, no answer? Dangit, I’d really like to know :-{
Hi
Just stumbled across your code from wikipedia. I ran it and produced the expected results. My question is does the ucb1 method only show the regret is there a way of extracting the best action that contributed the most to the cummulative regret.
Thanks
Yes, you can see on this line of code that the ucb algorithm produces, at each step, the chosen actions, the reward/regret, and the current set of confidence bounds it assigns to each action:
At the end of the loop one could print out the ucbs list to see the final values for each action.
Hi Jeremy, I’ve implemented a simulation using your code as a base (). My major question is why you set BestAction = 0? Is this saying the 0th arm is assumed to be the best action, or does this have a more mathematical reasoning? I noticed that if I randomize the BestAction as int in {0:numActions}, then my regret turns negative (no ideal!)
Hi Matt,
You will notice that the “bestAction” part of the code is inside a test called “simpleTest”. There I set the reward-generating functions to be uniform random variables with a biased depending on the index.
biases = [1.0 / k for k in range(5,5+numActions)]
So the first index is the best action, because it has the highest bias (they taper off to 1 as the index increases). The point of this was not to seed the algorithm to know the best action ahead of time, but to compare UCB’s choices against the best algorithm in hindsight (which, in this test, is to always choose the first action). That is to illustrate the regret bounds and compare them to the theorem in this post. If you change which is the best, then the regret may change sign because you’re comparing UCB’s actions against suboptimal actions.
Hope that helps!
Jeremy, thanks for the detailed response. It makes sense that you have to set the biases so that there is an priori optimal action. However, this is not the case in many practical multi-arm bandit scenarios. Do you know of a way to predict regret without designation of biases per action?
If you get to observe payoffs that occur for actions you did not choose in retrospect (e.g., stock market price movements), then you can keep track of all payoffs for all actions at every time step, and compute the best single action at the end to get regret.
If you can’t view payoffs of unchosen actions (indeed, this is the point of explore/exploit), then you can’t measure regret. This is why the theorem is useful, because you know that this immeasurable quantity is bounded (assuming the model hypotheses hold, which they don’t in many cases).
Hello j2kun
Thanks for such an great post. I have a question. I was unable to understand why should a(j, T+1) = a(j,T) after playing arm j. As n_j increases by 1 at T+1 how are you pointing out that the ratio would remain the same?.
Statement:
“Second, the probability that our upper bound is wrong decreases in the number of rounds independently of how many times we’ve played the action. That is because our upper bound a(j, T) is getting bigger for actions we haven’t played; any round in which we play an action j, it must be that a(j, T+1) = a(j,T), although the empirical mean will likely change.” | https://jeremykun.com/2013/10/28/optimism-in-the-face-of-uncertainty-the-ucb1-algorithm/?shared=email&msg=fail | CC-MAIN-2021-49 | refinedweb | 5,777 | 60.14 |
Grokking Merkle Tree
Grokking Merkle Trees in Scala
There are quite a few literatures that talk about this data structure by people who, the same as I, came from their early introduction to the blockchain technology and bitcoin.
This article is not trying to replicate the effort as some posts were in-depth in their walkthrough on the usage and importance of Merkle Tree in the blockchain space and other use cases. This is the reference I used that inspired me to write this article.
In this post, we will write an implementation of this data structure using the Scala language. The approach I will take is first to first a)define the data types then (b)implement the algorithm for Merkle tree. The simplest representative of this data structure is a binary tree where the input data are used to generate our leaf nodes that contain the cryptographic hash of a dataset which will continue iterating to create branches until it reaches the last node that we will call the root.
Abstract Data Types
We will define the Node as a trait where other types will refer from. It will contain a type parameter that represents the data type of the hash value which is typically be represented as an
Array[Byte].
trait Node[T] { val hash: T }
The structure of the tree is composed of:
- a Leaf which is a node that does not have any children nodes and stores the hashed value of the input data.
case class Leaf[T](datum: T) extends Node[T]{ override val hash: T = ??? }
- a Branch which is the node that is produced as a product of the hash value of its children nodes.
case class Branch[T](left: Node[T], right: Option[Node[T]]) extends Node[T] { override val hash: T = ??? }
As seen above, we are missing the computation for producing the hash value. We needed to provide the mechanism to generate this hash value through defining the hash function. We will then add another parameter in the case class definition written above with this:
(implicit hashFn: (T, Option[T]) => T)
This will be used both on the Leaf and Branch nodes whilst the second value parameter in the function literal is left optional.
To complete the definition, let us rewrite them and add the logic for the computation as follows:
case class Leaf[T](datum: T)(implicit hashFn: (T, Option[T]) => T extends Node[T] { override val hash: T = hashFn(datum, None) } case class Branch[T](left: T, right: Option[Node[T]])(implicit hashFn: (T, Option[T]) => T) extends Node[T]{ override val hash: T = hashFn(left.hash, right.flatMap(r => Some(r.hash)) }
We now have completed the abstract data types for this data structure. The other remaining thing to do is to work on writing the algorithm for our Merkle Tree.
Implementing the Merkle Tree
The merkle tree will be defined as a class with a companion object to hold the construction of the tree.
class MerkleTree[T](val root: Node[T])
The gist of the algorithm will be written in its companion object. It will be composed of two parts:
- the constructor that accepts the input data of some type T and generates the leaf nodes that will store the hash value of the input data.
object MerkleTree { def apply[T](data: Seq[T])(implicit hashFn: (T,Option[T]) => T): MerkleTree[T] = ??? ... }
- - the recursive method that will keep on creating branches until a single node is left. This node will represent the root of the Merkle tree.
object MerkleTree { ... def build[T](nodes: Seq[Node[T]])(implicit hashFn: (T, Option[T]) => T): MerkleTree[T] = ??? }
We also need an outlet to plug in our function for generating the hash for our inputs which will then be used in the data types.
Define our constructor,
apply
When creating an instance of the Merkle tree, the apply method will be called. It requires data as input which is typically an array of bytes.
The apply method will use those input data to generate our leaf nodes which will store the hash content. It will look something like below:
def apply[T](data: Seq[T])(implicit hashFn: (T, Option[T]) => T): MerkleTree = { val withLeaves = data.map(Leaf(_)) build(withLeaves) }
After generating the leaves, it will then pass this to
build which will complete the construction of the Merkle tree.
Create our tree with the
build method
The build method completes the creation of our Merkle Tree where it will recursively call itself until a single node is left, the root.
Whilst in the process, the sequence of nodes are paired from leaves and internal nodes (i.e. branch) to instantiate our
Branch case class. The pair serves as an input to our branch to represent either the left or an optional right node.
def build[T](nodes: Seq[Node[T]])(implicit hashFn: (T, Optional[T]) => T): MerkleTree = { if(nodes.length == 1) new MerkleTree[T](nodes.head) else { val withBranches = nodes.grouped(2).map { case Seq(l,r) => Branch(l, Some(r)) case Seq(a) => Branch(a, None) }.toSeq build(withBranches) } }
How to use it?
The first thing to do is to define our cryptographic hash function implementation. For this, we will use the built-in MessageDigest utility available in the jdk with
SHA-256 encoding.
This function literal will be used and injected to the classes and methods that require it. One way of writing it will look like as follows:
implicit val hashFn: (Array[Byte], Option[Array[Byte]])=> Array[Byte] = (byteArr, opt) => { val cc = opt match { case None => byteArr case _ => byteArr ++ opt.get } java.security.MessageDigest.getInstance("SHA-256").digest(cc) }
To prepare, we can define a sequence of string as inputs. Since our hashFn expects an input of Array[Byte], we need to convert each items to its bytes equivalent.
It will be written as shown below.
val inputs = Seq("trans01", "trans02", "trans03").map(_.getBytes) val tree = MerkleTree(inputs)
For more details on how to use this, visit the test spec.
You can also find the source code for this article at this github link
What's next?
So far, we only built a data structure in the most simplest way we could. This will help us understand the components being used as well as the algorithm to complete it. This is a good introduction to understanding the Merkle tree but far enough for practical use.
A follow up article(s) will ensue on topics for applying Merkle proofs and further improvements. | https://www.works-hub.com/learn/grokking-merkle-tree-bccc4 | CC-MAIN-2021-21 | refinedweb | 1,086 | 69.92 |
An In-Depth Look At C++ Including Standard Libraries, Uses, And Other Features.
C++ is an object-oriented programming language. But the truth is that C++ also supports procedural and generic programming.
It can be considered as a middle-level language as it has the features of a high-level language as well as lower-level language. This, in turn, makes C++ the best for real-time applications as well as low-level applications like system programming.
Read through this Entire C++ Training Series for a complete understanding of the concept.
Initially, C++ was developed as an enhancement to C language and was introduced by Bjarne Stroustrup at Bell Labs in 1979. At that time it was named “C with Classes”. Later on, in 1983, it was renamed as C++.
As C++ is a superset of C, it supports almost all the features of C language and hence any program in C language is also a C++ program.
What You Will Learn:
Object-Oriented Programming
C++ supports all the features of object-oriented programming like:
- Inheritance
- Polymorphism
- Encapsulation
- Abstraction
Standard Libraries
Like all other programming languages, C++ language also has all the core structures like variables, constants, data types, etc.
Standard C++ library also has a rich set of features that support various manipulating operations, string operations, array manipulations, etc. In addition, the standard template library (STL), gives rich features to manipulate data structures or container classes.
C++ Introduction
In a nutshell, C++ is a strongly or statically typed, a general-purpose, case-sensitive, compiled language which is a free-form programming language.
Apart from these, it also supports object-oriented programming features and also a lot many other features like STL which make it a prominent language. Most of the C++ compilers supports ANSI standard which ensures that C++ is portable.
Uses of C++
C++ can be used to program a variety of applications in almost every application domain.
In fact, the primary User interfaces of the Windows operating system and Macintosh operating systems are also written in C++.
C++ is majorly used in writing device drivers and other low-level system programming applications which require hardware manipulations.
First C++ Program
So what a basic C++ program looks like?
Let’s see a simple example to print a string of characters to the console.
The source code or simply code (a set of programming instructions) written in C++ will look like:
#include <iostream.h> using namespace std; int main() { cout<<”Hello,World!! This is C++ Tutorial!!\n”; cin.get(); return 0; }
Now let’s read this program statement by statement.
The first line “#include<iostream.h>” is a directive to the compiler to include a library of I/O functions of C++, iostream.h. The #include directive is used to include external libraries that will be used in programming.
Using iostream.h file, we can write programs to input-output data and information in C++.
The next line using namespace std; is a command to include standard namespace std into the program. The namespace is similar to a package or a library that includes library functions as well.
After this, we have a function definition, int main(). All C++ program have a single entry point i.e. main() function. The return type of the main function is an integer.
The next statement “{“is the opening brace and it indicates the start of the block of code. After this, we will have a series of statements that serve our purpose (in this case, the printing of string). Once the code is finished, we close the function block with the closing brace “}”.
Every function in C++ should have these opening and closing braces to indicate the start and end of the code block.
After the opening brace, we have another statement, cout<<” Hello, World!! This is C++ Tutorial!! \n”;
This statement prints the statement “Hello, World!! This is C++ Tutorial!!” to the console. The function we use to print the string in C++ is “cout”(spelled as C Out) which is a part of the header file “iostream.h” that we included at the beginning of the code.
The function call ‘cout’ followed by ‘<<’ is called the insertion operator in C++. This operator is used to output the contents to the standard output device.
The next statement cin.get(); is yet another function call which is a part of “iostream.h”. ‘cin’ is the function call to read input from a standard input device like a keyboard.
In our C++ program, cin calls the get() function. This is similar to “getch()’ function in C which gives time for the user to read the console output. ‘cin’ followed by ‘>>’ is called the extraction operator in C++ and is used to read input from the standard input device.
Next statement in the code returns 0;
This is the signal to the compiler that the function code has ended and control can now return to the start of the main function. As the main function returns int value, we have to return a numeric value (in this case 0). In C++, returning 0 indicates success.
Thus this is the basic C++ program that we presented for the users to understand the basic syntax of C++ program.
Having understood this, the next question that naturally comes to our mind is who should learn C++? What are the prerequisites of learning C++?
Ideally, anyone can learn C++. There are no hard and fast set rules that tell who can learn C++.
Anyone interested in programming or with a desire to make it big in the programming world can go for C++. C++ is easy to learn but at times it can be tricky. However, by practicing and reading hard, anyone can master the language.
Though it’s vast and has a lot of concepts to be acquired, we feel once we understand these concepts only then it takes more and more practicing before you can master the language.
Pre-requisites Of Learning C++
Although this tutorial will begin with the most basic concepts of C++, we still feel it’s necessary that the users taking up to learn C++ must have basic knowledge of Computers and should be well aware of computer fundamentals and basic programming terms.
Other than these prerequisites, anyone can learn C++. Even people who have been using other programming languages can make a switch to C++ anytime.
Advantages Of Knowing C++
The major advantage of learning C++ is its vast usage in almost every field. C++ is practically irreplaceable. No other language can do each and everything that we can do with C++, though many languages have acquired few features of C++ from time to time.
C++ is used in low-level programming, so when given a chance, you can actually work and get to know the compiler and other low-level stuff by using C++. C++ programmers have more scope in the software world and in turn fetch higher salaries than the rest.
Conclusion
With all these advantages, you can just take a leap and start with our C++ tutorials.
Going forward, we will brief you all the concepts in C++ in detail so that everyone, right from a novice programmer to experienced can master this wonderful language easily.
=> Take A Look At The C++ Beginners Guide Here | https://www.softwaretestinghelp.com/overview-cpp/ | CC-MAIN-2021-17 | refinedweb | 1,217 | 64.2 |
Announcing Persistent 2
August 29, 2014
Greg Weber
We are happy to announce the release of persistent 2.0
persistent 2.0 adds a flexible key type and makes some breaking changes. 2.0 is an unstable release that we want your feedback on for the soon to follow stable 2.1 release.
New Features
- type-safe composite primary and foreign keys
- added an upsert operation (update or insert)
- added an insertMany_ operation
Fixes
- An
Idsuffix is no longer automatically assumed to be a Persistent type
- JSON serialization * MongoDB ids no longer have a prefix 'o' character.
Breaking changes
- Use a simple ReaderT for the underlying connection
- fix postgreSQL timezone storage
- remove the type parameter from EntityDef and FieldDef
In depth
Composite keys
The biggest limitation of data modeling with persistent is an assumption of a simple (for current SQL backends an auto-increment) primary key. We learned from Groundhog that a more flexible primary key type is possible. Persistent adds a similar flexible key type while maintaining its existing invariant that a Key is tied to a particular table.
To understand the changes to the
Key data type, lets look at a change in the test suite for persistent 2.
i <- liftIO $ randomRIO (0, 10000) - let k = Key $ PersistInt64 $ abs i + let k = PersonKey $ SqlBackendKey $ abs i
Previously
Key contained a
PersistValue. This was not type safe.
PersistValue is meant to serialize any basic Haskell type to the database, but a given table only allows specific values as the key.
Now we generate the
PersonKey data constructor which specifies the Haskell key types.
SqlBackendKey is the default key type for SQL backends.
Now lets look at code from CompositeTest.hs
mkPersist sqlSettings [persistLowerCase| Parent name String maxlen=20 name2 String maxlen=20 age Int Primary name name2 age deriving Show Eq Child name String maxlen=20 name2 String maxlen=20 age Int Foreign Parent fkparent name name2 age deriving Show Eq |]
Here Parent has a composite primary key made up of 3 fields. Child uses that as a foreign key. The primary key of Child is the default key for the backend.
let parent = Parent "a1" "b1" 11 let child = Child "a1" "b1" 11 kp <- insert parent _ <- insert child testChildFkparent child @== parent
Future changes
Short-term improvements
Before the 2.1 release I would like to look at doing some simple things to speed up model compilation a little bit.
- Speed up some of the compile-time persistent code (there is a lot of obviously naive code).
- Reduce the size of Template Haskell generation (create a reference for each EntityDef and some other things rather than potentially repeatedly inlining it)
Medium-term improvement: better support for Haskell data types
We want to add better support for modeling ADTs, particularly for MongoDB where this is actually very easy to do in the database itself. Persistent already support a top-level entity Sum Type and a simple field ADT that is just an enumeration.
Another pain point is serializing types not declared in the schema. The declaration syntax in groundhog is very verbose but allows for this. So one possibility would be to allow the current DRY persistent declaration style and also a groundhog declaration style.
Long-term improvements: Projections
It would be possible to add projections now as groundhog or esqueleto have done. However, the result is not as end-user friendly as we would like. When the record namespace issue is dealt with in the GHC 7.10 release we plan on adding projections to persistent.
Ongoing: Database specific functionality
We always look forward to see more databases adapters for persistent. In the last year, a Redis and ODBC adapter were added.
Every database is different though, and you also want to take advantage of your database-specific features. esqueleto and persistent-mongoDB have shown how to build database specific features in a type-safe way on top of persistent.
Organization
Although the persistent code has no dependency on Yesod, I would like to make the infrastructure a little more independent of yesod. The first steps would be
- putting it under a different organization on github.
- having a separate mail list (should stackoverflow be prioritized over e-mail?) | http://www.yesodweb.com/blog/2014/08/announcing-persistent-2 | CC-MAIN-2015-06 | refinedweb | 700 | 52.9 |
Hey, Scripting Guy! I’m trying to write a script that does something with all the email messages in my Outlook Inbox. However, I need to sort those messages by the date they were received, and I can’t figure out how to do that. Can you help?
-- SL
Hey, SL. You know, no doubt a lot of you who read this column have thought, “The Scripting Guy who writes that column: I bet he’s a Red. He just seems like a Red to me.” Well, we have good news for you: you were absolutely right, the Scripting Guy who writes this column is a Red. Or, to be a little more precise, red and yellow are his dominant energies, with green and blue his inclined energies.
And yes, now that you see it in print it all seems so obvious, doesn’t it?
So what does all this dominant and inclined energy stuff mean? Well, to be honest, the Scripting Guy who writes this column has no idea. Recently the Scripting Guys’ larger team took part in an Insights Discovery session put on by Insights. Unfortunately, the Scripting Guy who writes this column had an important meeting with TechNet that same day and was thus unable to attend the session. Because he didn’t make the session he knows only that he’s a Fiery Red (as opposed to a Cool Blue, Earth Green, or Sunshine Yellow); for better or worse, he has no idea what that actually means.
In addition to being a Fiery Red, the Scripting Guy who writes this column is a Motivator (Orange) on the 72 Type Wheel System. And before you ask, no, he doesn’t know what that means, either. He does know, however, that the 72 Type Wheel system has 72 discrete sections that offer “an infinite number of colour combinations to represent the uniqueness of each individual.” Admittedly, the Scripting Guy who writes this column isn’t totally sure how you can take Red, Blue, Yellow, and Green and create an infinite number of color combinations. But, then again, Fiery Reds aren’t really known for their math skills. We tend to be people persons, if you know what we mean.
Speaking of people, it turns out that Scripting Guy Jean Ross is a Blue, with blue and red representing her dominant energies and green and yellow representing her inclined energies.
Oh, come on, don’t laugh: she can’t help being who she is.
Meanwhile, Scripting Guy Dean Tsaltas is a Burnt Umber, which is noteworthy for the fact that Burnt Umber isn’t even one of the colors that you can be. “We don’t understand it, either,” said Ken Myer, spokesperson for Insights. “But every time we ran his profile it came out Burnt Umber.”
Go figure.
At any rate, according to the brochure a Fiery Red is “Competitive, demanding, determined, strong-willed, and purposeful.” As you can see, these are all outstanding personality traits, and the Scripting Guy who writes this column is glad to see scientific acknowledgment of the fact that he’s just one heck of a guy. By contrast, a Cool Blue, like Scripting Guy Jean Ross, is “dull and boring, and should always defer to a Fiery Red when it comes to work-related issues.”
Or at least we assume that’s what a Cool Blue is. We didn’t really have time to read the entire brochure.
Incidentally, Fiery Reds are also known for their ability to write scripts that can sort the items found in a Microsoft Outlook folder. You know, scripts like the following, which returns a collection of items found in the Inbox, sorted (in descending order) by the date that the item was received:
On Error Resume Next Const olFolderInbox = 6 Set objOutlook = CreateObject("Outlook.Application") Set objNamespace = objOutlook.GetNamespace("MAPI") Set objFolder = objNamespace.GetDefaultFolder(olFolderInbox) Set colItems = objFolder.Items colItems.Sort "ReceivedTime", True For Each objItem in colItems Wscript.Echo objItem.ReceivedTime Next
So do we have any idea how this script works? Tut-tut; did you forget that the Scripting Guy who writes this column is a Fiery Red? To begin with, we define a constant named olFolderInbox and set the value of that constant to 6; we’ll use olFolderInbox to tell the script which Outlook folder we want to work with. After defining the constant we create an instance of the Outlook.Application object, then use these two lines of code to bind to the MAPI namespace (the only namespace we can bind to) and then to the Inbox folder:
Set objNamespace = objOutlook.GetNamespace("MAPI") Set objFolder = objNamespace.GetDefaultFolder(olFolderInbox)
As soon as we’ve made a connection to the Inbox we can go ahead and retrieve a collection of all the mail messages found there; that can be done simply by creating an object reference to the folder’s Items collection:
Set colItems = objFolder.Items
And now comes the really cool part:
colItems.Sort "ReceivedTime", True
Uh, actually, that was the really cool part, the part where we sort the returned collection – in descending order – by the date that the message was received. To do that we simply call the Sort method followed by two parameters: 1) ReceivedTime, the name of the property we want to sort by; and, 2) True, which tells the script to sort the collection in descending order. What if we wanted to sort the collection in ascending order? No problem; just leave off the second parameter:
colItems.Sort "ReceivedTime"
And what if we wanted to sort the collection, in descending order, by, say, the Subject line? That’s fine; just sort by the Subject property, and include that optional second parameter:
colItems.Sort "Subject", True
Etc., etc.
All we have to do now is see if this actually works. To do that, we set up a For Each loop designed to walk us through each item in the Inbox collection:
For Each objItem in colItems
And what can we do inside this loop? Pretty much anything we want. For today’s sample script, all we do is echo back the value of each item’s ReceivedTime property:
Wscript.Echo objItem.ReceivedTime
That results in output similar to this:
4/1/2008 8:11:23 AM 4/1/2008 8:00:12 AM 4/1/2008 7:59:16 AM 4/1/2008 7:44:47 AM 4/1/2008 7:43:26 AM 4/1/2008 7:33:39 AM 3/31/2008 4:17:42 PM 3/31/2008 12:55:12 PM
Pretty cool, huh? And, again, to sort in ascending order just leave off the second parameter.
So, in retrospect, does the Scripting Guy who writes this column wish that he had been able to make it to the Insights Discovery session? To be honest, no; for him it would have been a waste of time. After all, Insights Discovery is designed to help you become a better teammate, to help you better communicate with your co-workers, to help you learn to value and respect other people and their opinions. Needless to say, the Scripting Guy who writes this column has already mastered all of these things; no one is more sensitive to the needs and feelings if his co-workers than he is.
And if you don’t believe that, just ask the Scripting Editor; she’ll tell you what a kind-hearted and sensitive soul he truly is. Although you might have to speak slowly and avoid using big words; after all, she is a Cool Blue, you know.
Just like we said: sensitive to the unique needs of his co-workers.
thanks
Does “ReceivedTime” reflect the time when the user’s mailbox server (Exchange) received the email or when the user’s Outlook (running on the workstation in Cachaed Mode) received the email?
I am trying to find out how to make my searches in Outlook return only the surch criteria requested. I don’t need to see anything else when I am searching. Thoughts? | https://blogs.technet.microsoft.com/heyscriptingguy/2008/04/03/hey-scripting-guy-how-can-i-sort-items-retrieved-from-a-microsoft-outlook-folder/ | CC-MAIN-2018-05 | refinedweb | 1,345 | 69.62 |
Subject: Re: [boost] Is there interest in an "iofiber" library?
From: Oliver Kowalke (oliver.kowalke_at_[hidden])
Date: 2019-01-06 07:46:58
Am Sa., 5. Jan. 2019 um 23:54 Uhr schrieb David Sankel via Boost <
boost_at_[hidden]>:
> > - The dependency on C++11 (e.g. fiber objects are move-only handles
> > just like thread objects). ASIO is a C++98 library.
> > - Scope-creep within ASIO?
> >
> >
> It seems like it would be a good idea to broach the subject with Chris to
> see what his thoughts are. It seems like this is a replacement for
> something that already exists in ASIO. He may be happy to see this live in
> another library.
>
If boost.asio would be C++11, the lib could simply migrate from boost.
coroutine to boost.coroutine2 by changing the coro namespace.
I guess that a separate library would make more sense.
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk | https://lists.boost.org/Archives/boost/2019/01/244826.php | CC-MAIN-2022-27 | refinedweb | 167 | 88.63 |
you must be :
"addAttr -ln "noNormals" -at long -dv 0 |group1|pTorus1|pTorusShape1;"
mentalray need the attributes "noNormals"=0.
like a bug in maya .but it run.
you must be :
"addAttr -ln "noNormals" -at long -dv 0 |group1|pTorus1|pTorusShape1;"
mentalray need the attributes "noNormals"=0.
like a bug in maya .but it run.
I had compiled common shader for custom passes,download and install it.
complete examples and documentations
Link:
Password:by48B5
mill is death~~
I'm so sad.:(
1.right botton in mill ,"create scene element->point light".
2.select and export your network as a ".cgfx " file.
3.in maya,create a point light,create a cgfx surface shader,import the cgfx file....
cool!
new version, iray is working by mental mill?
2990
I disable "reflect",output images of depth is correct.
but output images of dgs material shader had no contain a reflective composition.I still did not solve the problem.
...
thank you for reply!
disable the reflection option, black dots is clear away.
I use ctrl_buffer buffer_store shader to produced,the results are correct.
where the problem in my code?
I had compiled a shader, it write framebuffer to disk.
like this:
#include <stdio.h>
#include <math.h>
#include <shader.h>
#include <mi_shader_if.h>
#include "com_aux.h"
I very much look forward to seeing the new metaSL output shaders will be successfully developed. We almost can hardly wait to use them at work.;)
Thanks again for your help!
If...
Thanks!
I made some low-level beginner's error.
Fixed mi file, the scene can work. However, the performance mentalray very low, and still had the error message.
......
PHEN 0.14 error:...
I export a scene to the mentalray standalone rendering , the software reported the following error:
API 0.0 error 301184: D:/Program Files/mental images/metaSL/mia_material_x.mi, line 14: too...
hi willanie
Without your permission, I wrote a mi file for your shader, so maya2011 users can also use it to work, hope you can forgive.
It works well, very fast.:p
hi, JanJordan
I made a mi file to mia_material_x.msl (downloaded from material.mentalimages.com), and put them into MI_RAY_INCPATH and MI_CUSTOM_SHADER_PATH the directory.
Then I found a node in...
When you install mentalray standalone 3.8/maya2011/mental mill 1.1, you must be disconnect from the Internet. Installation sequence is mentalray standalone 3.8-> maya2011-> mental mill 1.1. After...
I found the wrong, I used a "math.h" that from vs2008,but not mentalray.
Hello everybody,
I created the following shader, compiled it in vs2008 there. But it's "inputDis" argument does not work after connecting the texture, and why?
Only adjust the color, this... | https://forum.nvidia-arc.com/search.php?s=d7ce6f7df4a30c5110b825a87cf9478e&searchid=1270806 | CC-MAIN-2019-26 | refinedweb | 447 | 71.82 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.