These notes present the basics of Java in outline form. They only discuss the ways in which Java differs from C++. If something is not discussed, such as for loops, if-then-else statements, or constructors, it is because they are identical in both C++ and Java.
class HelloWorld {
    public HelloWorld() {
        System.out.println("Hello World");
    }

    public static void main(String args[]) {
        new HelloWorld();
    }
}
javac HelloWorld.java
java HelloWorld
int sum(int x, int y) {
    return x + y;
}

int multiply(int x, int y) {
    return x * y;
}

int main(int argc, char *argv[]) {
    sum(3,6);
    multiply(9,5);
}

In Java, a comparable program should be written as:
class Arithmetic {
    public Arithmetic() {
        sum(3,6);
        multiply(9,5);
    }

    int sum(int x, int y) {
        return x + y;
    }

    int multiply(int x, int y) {
        return x * y;
    }

    public static void main(String args[]) {
        new Arithmetic();
    }
}

A lot of novice Java programmers think this code is too complicated and try to write it more simply as:
class Arithmetic {
    int sum(...)
    int multiply(...)

    public static void main(String args[]) {
        sum(3,6);
        multiply(9,5);
    }
}

The Java compiler will complain about this code, though, because sum and multiply have not been declared static as well. If you call a function from a static function, then the called function must also be declared static. The reason is that non-static functions expect a pointer to an object to be passed to them via the this variable. However, static functions like main don't have objects associated with them and hence cannot pass a reference to an object to a non-static function. Trying to keep track of all the function calls made from a static function becomes tiresome, and hence it is best to simply move the code that typically appears in main to a constructor instead, and then invoke that code by having main create a single instance of the class.
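For completeness, the other way to satisfy the compiler is to declare the helper functions static as well; this is fine when the class needs no per-object state. Here is a minimal sketch (the class name Arithmetic2 is invented here to avoid clashing with the example above):

```java
// Alternative fix: make the helper functions static too, so the
// static main can call them without an object reference.
class Arithmetic2 {
    static int sum(int x, int y) { return x + y; }
    static int multiply(int x, int y) { return x * y; }

    public static void main(String[] args) {
        System.out.println(sum(3, 6));       // prints 9
        System.out.println(multiply(9, 5));  // prints 45
    }
}
```

This variant works because every method involved is static; the constructor pattern shown above remains the better choice once the class holds instance state.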
int countPositive(int [] a) {
    int count = 0;
    int i, j;
    for (i = 0; i < a.length; i++) {
        j = a[i];   // note: re-declaring j here as "int j" would be a compile error
        if (j > 0)
            count++;
    }
    j = a.length - count;
    System.out.println("number of non-positives = " + j);
    return count;
}
int countPositive(...) {
    ...
    for (int i = 0; i < 10; i++) {
        ...
    }
    for (int i = 20; i > 10; i--) {
        ...
    }
}
int a;
double b;

b = a;       // okay because there is no loss of precision

a = b;       // not okay because the fractional portion will be lost when
             // b's value is assigned to a

a = (int) b; // okay. Java will truncate the fractional part of b's
             // value and assign it to a
b instanceof Stack
b instanceof LinkedList
b instanceof Object

are all true. A way of seeing why this makes sense is that you typically ask if an object is an instance of something because you want to either cast to that type, or because you want to use a certain method. It is always safe to lose "precision" by casting to a superclass object, which is a less precise object.
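As a hedged illustration of the usual instanceof-then-cast pattern, the following sketch uses the standard java.util.Stack class (the helper name topIfStack is invented for this example):

```java
import java.util.Stack;

class InstanceofDemo {
    // Returns the top of the stack if obj is a Stack, otherwise null.
    static Object topIfStack(Object obj) {
        if (obj instanceof Stack) {      // after this check the downcast is safe
            Stack s = (Stack) obj;
            return s.peek();
        }
        return null;
    }

    public static void main(String[] args) {
        Stack s = new Stack();
        s.push(42);
        System.out.println(topIfStack(s));        // prints 42
        System.out.println(topIfStack("hello"));  // prints null
    }
}
```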
Example: String a = "Hello"; a =.
String token;
String day = "Wednesday";
...
if (token.equals("+"))
    ...
else if (token.equals(day))
    ...
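The reason these comparisons use equals rather than == is that == compares object references, not characters. A small sketch (the class name EqualsDemo is invented) shows the difference by building an equal but distinct String at run time:

```java
class EqualsDemo {
    public static void main(String[] args) {
        String day = "Wednesday";
        // new String(...) forces a distinct object with the same characters.
        String token = new String("Wednesday");

        System.out.println(token == day);       // prints false: different objects
        System.out.println(token.equals(day));  // prints true: same characters
    }
}
```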
e.g., int a [];
      int [] b;
      int c [] = new int[6];
      int [] d = new int[10];
String weekdays [] = {"Monday", "Tuesday", "Wednesday", "Thursday", "Friday"};
e.g., System.out.println(c.length) will print 6
e.g., int a [][] = new int[4][5];

      int b [][] = new int[4][];
      b[0] = new int[5];
      b[1] = new int[4];
      ...
e.g., int[] arrayOfInts = { 32, 87, 3, 589, 12, 1076, 2000, 8, 622 };
      int sum = 0;
      for (int element : arrayOfInts) {
          sum += element;
      }

You must declare element in the for statement. If you try to declare element outside the for statement, you will get a compiler error message.
class List {
    public void insertAfter(Object referenceObj, Object objToInsert);
    public Object getHead();
    public Object getTail();
    ...
}
Stack a;
Stack b = new Stack();
b.push(3);
a = b;
a.push(5);  // b's stack now contains both 3 and 5
Example:

class Stack {
    int size = 10;
    int top = 0;
    ...
}
class RandomValues {
    int randomNums[] = new int[10];

    // Initialization block
    {
        for (int i = 0; i < randomNums.length; i++) {
            randomNums[i] = (int)(100.0 * Math.random());
        }
    }
}
Example: static String color = "red";
class Point { public double x; public double y; }
e.g., class foo {
          ...
          public class goo {
              ...
          }
      }

foo.goo a = new foo.goo() will fail because a goo can only be created by an instance of a foo.
final double SALES_TAX = 0.05;
void swap(int &a, int &b) {
    int temp = a;
    a = b;
    b = temp;
}

actual call: swap(x, y);
void swap(int *a, int *b) {
    int temp = *a;
    *a = *b;
    *b = temp;
}

actual call: swap(&x, &y);

As you can see, there are many more opportunities to introduce an error when you have to explicitly take the address of arguments and dereference parameters.
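Java, by contrast, has neither reference parameters nor explicit pointers: all arguments are passed by value, so a swap(int a, int b) method cannot change the caller's variables. One common workaround, sketched below under that assumption, is to swap elements inside an array, since the caller and the callee share the same array object:

```java
class SwapDemo {
    // Swap two elements of an array; the caller sees the change
    // because both caller and callee refer to the same array object.
    static void swap(int[] values, int i, int j) {
        int temp = values[i];
        values[i] = values[j];
        values[j] = temp;
    }

    public static void main(String[] args) {
        int[] values = { 3, 7 };
        swap(values, 0, 1);
        System.out.println(values[0] + " " + values[1]);  // prints 7 3
    }
}
```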
if { 0 } {
    This text is not seen by Tcl (well, unless you use braces in it) and
    you can format it according to the Wiki rules (or anything you want
    to try yourself)
}

RS created the Wiki page DOS BAT magic that says it all.

escargo: How would we distinguish between the script's explanation and other text that an author (or developer) decides should be ignored? Perhaps a convention about a token that would flag the contents as explanation.
For standalone scripts, it is natural that the toplevel "." belongs to them, so the framework would make its own toplevel, and clear "." before running the code.

Thinking even further, perhaps incorporating from TclPro or Tuba to animate/step thru the code will enhance the value of the framework as a teaching tool (anyone remember the original Java example demonstrating various sorting algorithms?) --- stevel

LV Take a look at the demo frameworks used in tk, itcl, BWidgets, and the source for Effective Tcl/Tk - these give you some currently existing frameworks from which you might be able to work. Most, if not all, of these provide the explain/source/execute model. Or grab
bryan oakley sez... Perhaps a simple API is called for. Tk could create a namespace named Demo, with child namespaces named after the packages. So, tk's demo would be in ::Demo::Tk, BLT's in ::Demo::BLT, etc. That, or reverse it so that every package has a Demo namespace below it (::tk::Demo, ::BLT::demo, etc).

Once done, the namespace could make available two or three commands, such as:
::Demo::tk::show pathName
    builds a demo rooted in path$name
::Demo::tk::info description
    returns the description of the demo
::Demo::tk::info code
    returns the code (trivially implemented as ::info body ::Demo::tk::show, perhaps)
::Demo::tk::info title
    returns the demo "title", suitable for use as a frame title or item in a hierarchical listbox or popup or whatever.

The demo application, then, would be fairly trivial. It could gather up all namespaces under ::Demo, create a listbox based on the results of ::Demo::$whatever::info title, and put in a mechanism so that when an item is selected, the contents of another window are updated to include the rendered demonstration and/or the code.

Thus, any package that conforms to the official demo API will automagically be made available by the demo application without any extra demonstration steps.

This API could be put in a separate package so that it doesn't get loaded unless necessary. So a package index file might have two entries per package:
package ifneeded myCleverPackage 1.0 ...
package ifneeded myCleverPackage-demo 1.0 ...
It is our firm belief that software can only be successful if it is properly documented. Too many academic software projects die prematurely once their creators leave the university or the workgroup in which the software was developed, since with their creators also knowledge of internal structures, interfaces, and the valuable bag of tricks leaves, a gap that can not be closed by reading sources, trial-and-error, and guessing.
The deal.II project has therefore from its infancy adopted a policy that every aspect of the interface needs to be well-documented before its inclusion into the source tree. Since we have found that it is impossible to keep documentation up-to-date if it is not written directly into the program code, we write the documentation directly at the place of declaration of a function or class and use automatic tools to extract this information from the files and process it into HTML for our web-pages, or LaTeX for printing.
In addition to the API documentation, we maintain a series of well-documented example programs, which also follow a certain ``literate programming'' style in that the explanations of what is happening are integrated into the program source by means of comments, and are extracted by small scripts.
This document first explains the basics of documenting the API and then of writing example programs.
In order to extract documentation from the header files of the project, we use doxygen. It requires that documentation is written in a form which closely follows the JavaDoc standard.
Basically, every declaration, whether class or member function/variable declaration, global function or namespace, may be preceded by a comment of the following form:
/**
 * This is an example documentation.
 *
 * @author Wolfgang Bangerth, 2000
 */
class TestClass
{
  public:
    /**
     * Constructor
     */
    TestClass ();

    /**
     * Example function
     */
    virtual void test () const = 0;

    /**
     * Member variable
     */
    const unsigned int abc;
};
doxygen will then generate a page for the class TestClass and document each of the member functions and variables. The content of the @author tag will be included into the online documentation of the class.
In order to allow better structured output for long comments, doxygen supports a great number of tags for enumerations, sectioning, markup, and other fields. We encourage you to take a look at the doxygen webpage to get an overview. However, here is a brief summary of the most often used features:
/**
 * <ul>
 *   <li> foo
 *   <li> bar
 * </ul>
 */

you can get itemized lists both in the online and printed documentation:
In other words, one can use standard HTML tags for this task. Likewise, you can get numbered lists by using the respective HTML tag <ol>.
If you write comments like this,
/**
 * @verbatim
 *   void foobar ()
 *   {
 *     i = 0;
 *   }
 * @endverbatim
 */

you will get the lines between the verbatim environment with the same formatting and in typewriter font:
void foobar ()
{
  i = 0;
}

This is useful if you want to include small sample code snippets into your documentation. In particular, it is important that the formatting is preserved, which is not the case for all other text.
In order to use typewriter font, for instance for function arguments or variables, use the <code> HTML tag. For a single word, you can also use the form @p one_word_without_spaces. (The <tt> tag is obsolete in HTML5.)
If you refer to member variables and member functions doxygen has better options than this: use function_name() to reference member functions and #variable_name for member variables to create links automatically. Refer to the documentation of doxygen to get even more options for global variables.
To generate output in italics, use the @em one_word_without_spaces tag or the <em> HTML tag. To generate boldface, use <b>.
For simple and short formulæ use the <i> HTML tag. Note that you can use <sub> and <sup> to get subscripts and superscripts, respectively. Only for longer formulæ use $formula$ to generate a LaTeX formula which will then be included as a graphical image.
Sections in class and function documentations can be generated using the <hN> HTML headline tags. Headlines inside class documentation should start at level 3 (<h3>) to stay consistent with the structure of the doxygen output.
Sections cannot be referenced, unless you add a <A NAME="..."> name anchor to them. If you really have to do this, please make sure the name does not interfere with doxygen generated anchors.
doxygen sometimes has problems with inlined functions of template classes. For these cases (and other cases of parts of the code to be excluded from documentation), we define a preprocessor symbol DOXYGEN when running doxygen. Therefore, the following template can be used to avoid documentation:
/* documented code here */

#ifndef DOXYGEN
/* code here is compiled, but ignored by doxygen */
#endif // DOXYGEN
Writing example files for classes is supported by doxygen. These example files go into deal.II/examples/doxygen. If they are short, documentation should be inside, and they are included into the documentation with @include filename. Take a look at how the class BlockMatrixArray does this.
Larger example files should be documented using the doxygen command @dontinclude and related commands. However, if these programs do something reasonable and do not only demonstrate a single topic, you should consider converting them to a complete example program in the step-XX series.
Tutorial programs consist of an introduction, well-documented code, and a section that shows the output and numerical results of running the program. These three parts are written in separate files: for the step-xx program, for example, they would be in the files examples/doc/step-xx/doc/intro.dox, examples/doc/step-xx/step-xx.cc and examples/doc/step-xx/doc/results.dox. There are a number of scripts that then process and concatenate these three different files and send them through doxygen for generation of HTML output. In general, if you want to see how certain markup features can be used, it is worthwhile looking at the existing tutorial program pages and the files they are generated from.
The introduction, as well as the results section, will be processed as if they were doxygen comments. In other words, all the usual doxygen markup will work in these sections, including latex formulas, though the format for the formula environment is a bit awkward. Since it takes much longer to run doxygen for all of deal.II than to run latex, most of the lengthier introductions are just written in latex (with a minimal amount of markup) and later converted into doxygen format. One thing to be aware of is that you cannot reference formulas in doxygen, so you have to work around that by referring to them in words rather than by formula numbers.
More important is what goes into the introduction. Typically, this would first be a statement of the problem that we want to solve. Take a look, for example, at the step-22 or step-31 tutorial programs. Then come a few sections in which we would discuss in mathematical terms the algorithms that we want to use; this could, for example, include the time stepping, discretization, or solver approaches. step-22 and step-31 are again good, if lengthy, examples for this.
On the other hand, if a program is an extension of a previous program, these things need not be repeated: you would just reference the previous program. For example, step-16 does not talk about adaptive meshes any more — it extends step-6 and simply refers there for details. Likewise, step-32 simply refers to step-31 for the problem statement and basic algorithm and simply focuses on those parts that are new compared to step-31.
The purpose of the introduction is to explain what the program is doing. It should set the mindset so that when you read through the code you already know why we are doing something. You may not yet know how this is done, but this is what the documentation within the code is for. At least you don't have to wonder any more why we are building up this complicated preconditioner — we've already discussed this in the introduction.
If it helps the understanding, the introduction can refer to particular pieces of code (but doesn't have to). For example, the introduction to step-20 has pretty lengthy code snippets that explain how to implement a general interface of operators that may or may not be matrices. This would be awkward to do within the code since in the code the view is somewhat smaller (you have to have complete parameter lists, follow the syntax of the programming language, etc, all of which obscures the things one wants to discuss when giving a broad overview related to particular C++ constructs). On the other hand, showing code snippets in the introduction risks duplicating code in two places, which will eventually get out of synch. Consequently, this instrument should only be used sparingly.
At present, the tools that extract information from the actual example programs' code are rather dumb. They are, to be precise, three Perl scripts located in the directory of the deal.II/doc/doxygen/tutorial tree, where the .cc files of the tutorial programs are converted into doxygen input files. In essence, what these scripts do is to create doxygen input that contains the comments of the program as text, and the actual code of the programs as code snippets. You can see this when you look at the pages for each of the tutorials, where the code is indented relative to the text.
The whole thing being interpreted by doxygen means that you can put anything doxygen understands into comments. This includes, for example references to classes or members in the library (in fact, you just need to write their name out and doxygen will automatically link them), formulas, lists, etc. It all will come out as if you had written comments for doxygen in the first place.
The bigger question is how to write the comments that explain what's going on in individual code blocks. Many years back we wrote them so that every line or every two lines had their own comment. You can still see this in some of the older tutorial programs, though many of them have in the meantime been converted to a newer style: it turns out that if you have comments so frequently, it becomes hard to follow the flow of an algorithm. In essence, you know exactly what each line does, but you can't get an overview of what the function as a whole does. But that's exactly the point of the tutorial programs, of course!
So the way we now believe tutorial programs should be written is to have comments for each logical block. For example, the solve() function in many of the programs is relatively straightforward and has at most a dozen lines of code. So put a comment in front of the function that explains all the things that are going on in the function, and then show the function without comments in it — this way, a reader will read through the half or full page of documentation understanding the big picture, and can then see the whole function all at once on a single screen without having to scroll up and down. In the old way, the code would be spread out over a couple pages, with comments between almost any two lines, making it hard to see how it all fits together.
It is somewhat subjective how much code you should leave in each block that you document separately. It might be a single line if something really important and difficult happens there, but most of the time it's probably more along the lines of 6 to 12 lines — a small enough part of the code so that it's easy enough to grasp by looking at it all at once, but large enough that it contributes a significant part or all of an algorithm.
The results section should show (some of) the output of a program, such as the console output and/or a visualization of graphical output. It should also contain a brief discussion of this output. It is intended to demonstrate what the program does, so that a reader can see what happens if the program were executed without actually running it. It helps to show a few nice graphics there.
This section needs not be overly comprehensive. If the program is the implementation of a method that's discussed in an accompanying paper, it's entirely ok to say "for further numerical results, see ...".
Like the introduction, the results section file is copied verbatim into input for doxygen, so all doxygen markup is possible there. | https://www.dealii.org/developer/developers/writing-documentation.html | CC-MAIN-2019-39 | refinedweb | 2,104 | 57.4 |
Managed Extensibility Framework (MEF)
First, you have Program import a calculator. This allows the separation of user interface concerns, such as the console input and output that will go into Program, from the logic of the calculator.
Add the following code to the Program class:
Now that you have defined ICalculator, you need a class that implements it. Add the following class to the module or SimpleCalculator namespace:
This code simply reads a line of input and calls the Calculate function of ICalculator on the result, which it writes back to the console. That is all the code you need in Program. All the rest of the work will happen in the parts.
In this case, the metadata for each operation is the symbol that represents that operation, such as +, -, *, and so on. To make the addition operation available, add the following class to the module or SimpleCalculator namespace:
With these parts in place, all that remains is the calculator logic itself. Add the following code in the MySimpleCalculator class to implement the Calculate method:
Note that in order for the contract to match, the ExportAttribute attribute must have the same type as the ImportAttribute.
Compile and run the project. Test the new Mod (%) operator.
To download the complete code for this example, see the SimpleCalculator sample.
For more information and code examples, see Managed Extensibility Framework. For a list of the MEF types, see the System.ComponentModel.Composition namespace. | https://msdn.microsoft.com/en-us/library/dd460648.aspx | CC-MAIN-2015-32 | refinedweb | 243 | 57.06 |
Calling Bevel tool and setting parameters
Just when I thought I might be getting the hang of python in C4D I ran into a challenge of trying to bevel an edge selection. I've been able to piece together most of what I think I need but I've run into two problems. The first being that I can't set the offset of the tool and second I must be missing some sort of command to have the polygons show up after the resulting bevel takes place. I've been scouring the forums and trying a bunch of things for the last few hours and I've come to the point where I have to throw in the towel and seek professional help.
Any and all help is appreciated.
Here's what I have so far -
tags = op.GetTags()
for tag in tags:
    if tag.GetType() == 5701:  # edge selection
        doc.SetMode(c4d.Medges)
        selection = tag.GetBaseSelect()
        edgeSelected = op.GetEdgeS()
        selection.CopyTo(edgeSelected)

        c4d.CallCommand(431000015, 431000015)  # xll bevel tool
        tool = plugins.FindPlugin(doc.GetAction(), c4d.PLUGINTYPE_TOOL)

        bcBevel = doc.GetActiveToolData()
        bcBevel.SetData(c4d.MDATA_BEVEL_OFFSET_MODE, 0)
        bcBevel.SetData(c4d.MDATA_BEVEL_RADIUS, 1)
        bcBevel.SetData(c4d.MDATA_BEVEL_SUB, 2)
        bcBevel.SetData(c4d.MDATA_BEVEL_DEPTH, 1)
        bcBevel.SetData(c4d.MDATA_BEVEL_SELECTION_PHONG_BREAK, False)

        c4d.CallButton(tool, c4d.MDATA_APPLY)
        op.Message(c4d.MSG_UPDATE)
        c4d.EventAdd()
Hi @del thanks for reaching us,
As you figured the new bevel tool is c4d.ID_XBEVELTOOL (431000015) (I will update the python documentation) but in order to make it work the recommended way is to use SendModelingCommand like so
import c4d

doc.StartUndo()

# Settings
settings = c4d.BaseContainer()
settings[c4d.MDATA_BEVEL_MASTER_MODE] = c4d.MDATA_BEVEL_MASTER_MODE_SOLID
settings[c4d.MDATA_BEVEL_RADIUS] = 5
settings[c4d.MDATA_BEVEL_SELECTION_PHONG_BREAK] = False

doc.AddUndo(c4d.UNDOTYPE_CHANGE, op)

res = c4d.utils.SendModelingCommand(command=c4d.ID_XBEVELTOOL,
                                    list=[op],
                                    mode=c4d.MODELINGCOMMANDMODE_EDGESELECTION,
                                    bc=settings,
                                    doc=doc)

doc.EndUndo()
c4d.EventAdd()
You can find a list of all symbols in the cpp documentation of xbeveltool.h.
Cheers,
Maxime.
Hi and thanks for the help.
I tried the script but I don't get any results. No error message and no change to the model. Please keep in mind that I'm in R19 if that makes a difference. I tried it in R20 and it creates a bevel. Not the bevel I expected but it was at least promising that it did something.
Here is what I have based on your info.
import c4d
from c4d import utils

def main():
    bcBevel = c4d.BaseContainer()
    bcBevel[c4d.MDATA_BEVEL_MASTER_MODE] = c4d.MDATA_BEVEL_MASTER_MODE_CHAMFER
    bcBevel[c4d.MDATA_BEVEL_OFFSET_MODE] = c4d.MDATA_BEVEL_OFFSET_MODE_FIXED
    bcBevel[c4d.MDATA_BEVEL_RADIUS] = 1
    bcBevel[c4d.MDATA_BEVEL_SUB] = 2
    bcBevel[c4d.MDATA_BEVEL_DEPTH] = 1
    bcBevel[c4d.MDATA_BEVEL_SELECTION_PHONG_BREAK] = False

    tags = op.GetTags()
    for tag in tags:
        if tag.GetType() == 5701:
            doc.StartUndo()
            doc.SetMode(c4d.Medges)
            selection = tag.GetBaseSelect()
            edgeSelected = op.GetEdgeS()
            selection.CopyTo(edgeSelected)
            doc.AddUndo(c4d.UNDOTYPE_CHANGE, op)
            res = c4d.utils.SendModelingCommand(c4d.ID_XBEVELTOOL,
                                                [op],
                                                c4d.MODELINGCOMMANDMODE_EDGESELECTION,
                                                bcBevel,
                                                doc)
            doc.EndUndo()
            c4d.EventAdd()

if __name__ == '__main__':
    main()
C4D might have been 'messed up' based on my prior attempts. I've restarted it and may be making progress.
Here in R19 it's working nicely. Let me know if you are able to reproduce the issue.
Cheers,
Maxime.
Restarting c4d fixed it! Now I just have to get the hidden polys produced by the bevel to appear. I'm pretty sure I need a message update or something like that as they appear once I move the model.
Thanks for your help.
.del
Does anybody know what format I'm supposed to use to send a number to the Bevel tool for the radius? I'm currently only able to get it to work with whole numbers like 1 or 2. I'd like to set it to .5.
I've tried float(.5) and int(.5) but both end up giving me no bevel. I can't find any documentation for this.
thanks,
.del
found it!!! I had to float('0.5')
settings[c4d.MDATA_BEVEL_RADIUS] = 0.5
Works fine here. | https://plugincafe.maxon.net/topic/11502/calling-bevel-tool-and-setting-parametrs | CC-MAIN-2020-50 | refinedweb | 652 | 54.29 |
#include <iostream>
#include <climits>
using namespace std;

int main (void)
{
    int number, temp, min = INT_MAX, max = INT_MIN;
    cout << "Enter the number of elements: ";
    cin >> number;
    for (int i = 1; i <= number; i++) {
        cout << "Enter a number: ";
        cin >> temp;
        if (temp > max) max = temp;
        if (temp < min) min = temp;
    }
    cout << "The smallest number is: " << min << "\n";
    cout << "The largest number is: " << max << "\n";
    cout << "The range is: " << max - min;
    return 0;
}
This note was uploaded on 04/24/2010 for the course COS 120 taught by Professor Bonev during the Fall '08 term at American University in Bulgaria.
What is a Library In Linux or UNIX?
In Linux or UNIX-like operating systems, a library is nothing but a collection of resources such as subroutines/functions, classes, values, or type specifications. There are two types of libraries:
- Static libraries - All lib*.a files are included into executables that use their functions. For example, you can run a sendmail binary in a chrooted jail using statically linked libs.
- Dynamic libraries or linking [ also known as DSO (dynamic shared object)] - All lib*.so* files are not copied into executables. The executable will automatically load the libraries using ld.so or ld-linux.so.
Linux Library Management Commands
- ldconfig : Updates the necessary links for the run time link bindings.
- ldd : Tells what libraries a given program needs to run.
- ltrace : A library call tracer.
- ld.so/ld-linux.so: Dynamic linker/loader.
Important Files
As a sys admin you should be aware of important files related to shared libraries:
- /etc/ld.so.cache : This file is created by the ldconfig command.
- lib*.so.version : Shared libraries stores in /lib, /usr/lib, /usr/lib64, /lib64, /usr/local/lib directories.
#1: ldconfig command
You need to use the ldconfig command to create, update, and remove the necessary links and cache (for use by the run-time linker, ld.so) to the most recent shared libraries found in the directories specified on the command line, in the file /etc/ld.so.conf, and in the trusted directories (/usr/lib, /lib64 and /lib). The ldconfig command checks the header and file names of the libraries it encounters when determining which versions should have their links updated. This command also creates a file called /etc/ld.so.cache which is used to speed up linking.
Examples
In this example, you've installed a new set of shared libraries at /usr/local/lib/:
$ ls -l /usr/local/lib/
Sample outputs:
-rw-r--r-- 1 root root 878738 Jun 16 2010 libGeoIP.a -rwxr-xr-x 1 root root 799 Jun 16 2010 libGeoIP.la lrwxrwxrwx 1 root root 17 Jun 16 2010 libGeoIP.so -> libGeoIP.so.1.4.6 lrwxrwxrwx 1 root root 17 Jun 16 2010 libGeoIP.so.1 -> libGeoIP.so.1.4.6 -rwxr-xr-x 1 root root 322776 Jun 16 2010 libGeoIP.so.1.4.6 -rw-r--r-- 1 root root 72172 Jun 16 2010 libGeoIPUpdate.a -rwxr-xr-x 1 root root 872 Jun 16 2010 libGeoIPUpdate.la lrwxrwxrwx 1 root root 23 Jun 16 2010 libGeoIPUpdate.so -> libGeoIPUpdate.so.0.0.0 lrwxrwxrwx 1 root root 23 Jun 16 2010 libGeoIPUpdate.so.0 -> libGeoIPUpdate.so.0.0.0 -rwxr-xr-x 1 root root 55003 Jun 16 2010 libGeoIPUpdate.so.0.0.0
Now when you run an app related to libGeoIP.so, you will get an error about a missing library. You need to run the ldconfig command manually to link libraries by passing them as command-line arguments with the -l switch:
# ldconfig -l /path/to/lib/our.new.lib.so
Another recommended options for sys admin is to create a file called /etc/ld.so.conf.d/geoip.conf as follows:
/usr/local/lib
Now just run ldconfig to update the cache:
# ldconfig
To verify new libs or to look for a linked library, enter:
# ldconfig -v
# ldconfig -v | grep -i geoip
Sample outputs:
libGeoIP.so.1 -> libGeoIP.so.1.4.6 libGeoIPUpdate.so.0 -> libGeoIPUpdate.so.0.0.0
Troubleshooting Chrooted Jails
You can print the current cache with the -p option:
# ldconfig -p
Putting a web server such as Apache / Nginx / Lighttpd in a chroot jail minimizes the damage done by a potential break-in by isolating the web server to a small section of the filesystem. It is also necessary to copy all files required by Apache inside the filesystem rooted at the /jail/ directory, including web server binaries, shared libraries, modules, configuration files, and php/perl/html web pages. You also need to copy the /etc/{ld.so.cache,ld.so.conf} files and the /etc/ld.so.conf.d/ directory to the /jail/etc/ directory. Use the ldconfig command to update, print, and troubleshoot chrooted jail problems:
### chroot to jail bash ###
chroot /jail /bin/bash

### now update the cache in /jail ###
ldconfig

### print the cache in /jail ###
ldconfig -p

### copy missing libs ###
cp /path/to/some.lib /jail/path/to/some.lib
ldconfig
ldconfig -v | grep some.lib

### get out of jail ###
exit

### may be delete bash and ldconfig to increase security (NOTE path carefully) ###
cd /jail
rm sbin/ldconfig bin/bash

### now start nginx jail ###
chroot /jail /usr/local/nginx/sbin/nginx
Rootkits
A rootkit is a program (or combination of several programs) designed to take fundamental control of a computer system, without authorization by the system's owners and legitimate managers. Usually, rootkits use the /lib, /lib64, and /usr/local/lib directories to hide themselves from real root users. You can use the ldconfig command to view the cache of all shared libraries and look for unwanted programs:
# /sbin/ldconfig -p | less
You can also use various tools to detect rootkits under Linux.
Common errors
You may see the errors as follows:
Dynamic linker error in foo
Can't map cache file cache-file
Cache file cache-file foo
All of the above errors means the linker cache file /etc/ld.so.cache is corrupt or does not exists. To fix these errors simply run the ldconfig command as follows:
# ldconfig
Can't find library xyz Error
The executable required a dynamically linked library that ld.so or ld-linux.so cannot find. It means a library called xyz needed by the program called foo is not installed or the path is not set. To fix this problem, install the xyz library and set the path in the /etc/ld.so.conf file or create a file in the /etc/ld.so.conf.d/ directory.
#2: ldd command
ldd (List Dynamic Dependencies) is a Unix / Linux program to display the shared libraries required by a program. This tool is needed when building and running server programs in a chroot jail. A typical example, listing the shared libraries of the Apache server:
# ldd /usr/sbin/httpd
Sample outputs:
libm.so.6 => /lib64/libm.so.6 (0x00002aff52a0c000)
libpcre.so.0 => /lib64/libpcre.so.0 (0x00002aff52c8f000)
libselinux.so.1 => /lib64/libselinux.so.1 (0x00002aff52eab000)
libaprutil-1.so.0 => /usr/lib64/libaprutil-1.so.0 (0x00002aff530c4000)
libcrypt.so.1 => /lib64/libcrypt.so.1 (0x00002aff532de000)
libldap-2.3.so.0 => /usr/lib64/libldap-2.3.so.0 (0x00002aff53516000)
liblber-2.3.so.0 => /usr/lib64/liblber-2.3.so.0 (0x00002aff53751000)
libdb-4.3.so => /lib64/libdb-4.3.so (0x00002aff5395f000)
libexpat.so.0 => /lib64/libexpat.so.0 (0x00002aff53c55000)
libapr-1.so.0 => /usr/lib64/libapr-1.so.0 (0x00002aff53e78000)
libpthread.so.0 => /lib64/libpthread.so.0 (0x00002aff5409f000)
libdl.so.2 => /lib64/libdl.so.2 (0x00002aff542ba000)
libc.so.6 => /lib64/libc.so.6 (0x00002aff544bf000)
libsepol.so.1 => /lib64/libsepol.so.1 (0x00002aff54816000)
/lib64/ld-linux-x86-64.so.2 (0x00002aff527ef000)
libuuid.so.1 => /lib64/libuuid.so.1 (0x00002aff54a5c000)
libresolv.so.2 => /lib64/libresolv.so.2 (0x00002aff54c61000)
libsasl2.so.2 => /usr/lib64/libsasl2.so.2 (0x00002aff54e76000)
libssl.so.6 => /lib64/libssl.so.6 (0x00002aff5508f000)
libcrypto.so.6 => /lib64/libcrypto.so.6 (0x00002aff552dc000)
libgssapi_krb5.so.2 => /usr/lib64/libgssapi_krb5.so.2 (0x00002aff5562d000)
libkrb5.so.3 => /usr/lib64/libkrb5.so.3 (0x00002aff5585c000)
libcom_err.so.2 => /lib64/libcom_err.so.2 (0x00002aff55af1000)
libk5crypto.so.3 => /usr/lib64/libk5crypto.so.3 (0x00002aff55cf3000)
libz.so.1 => /usr/lib64/libz.so.1 (0x00002aff55f19000)
libkrb5support.so.0 => /usr/lib64/libkrb5support.so.0 (0x00002aff5612d000)
libkeyutils.so.1 => /lib64/libkeyutils.so.1 (0x00002aff56335000)
Now you can copy all those libraries, one by one, to the /jail directory:
# mkdir /jail/lib
# cp /lib64/libm.so.6 /jail/lib
# cp /lib64/libkeyutils.so.1 /jail/lib
You can write a bash script to automate the entire procedure:
cp_support_shared_libs(){
	local d="$1"       # JAIL ROOT
	local pFILE="$2"   # copy bin file libs
	local files=""
	### use ldd to get shared libs list ###
	files="$(ldd $pFILE | awk '{ print $3 }' | sed '/^$/d')"
	for i in $files
	do
		dcc="${i%/*}"   # get dirname only
		[ ! -d ${d}${dcc} ] && mkdir -p ${d}${dcc}
		${_cp} -f $i ${d}${dcc}
	done
	# Works with 32 and 64 bit ld-linux
	sldl="$(ldd $pFILE | grep 'ld-linux' | awk '{ print $1}')"
	sldlsubdir="${sldl%/*}"
	[ ! -f ${d}${sldl} ] && ${_cp} -f ${sldl} ${d}${sldlsubdir}
}
Call cp_support_shared_libs() as follows:
cp_support_shared_libs "/jail" "/usr/local/nginx/sbin/nginx"
Report Missing Functions
Type the following command:
$ ldd -d /path/to/executable
Report Missing Objects
Type the following command:
$ ldd -r /path/to/executable
Determine If a Particular Feature Is Supported Or Not
TCP Wrapper is a host-based networking ACL system, used to filter network access to Internet services. TCP Wrapper was originally written to monitor and stop cracking activities on UNIX / Linux systems. To determine whether a given daemon supports TCP Wrapper, run the following command:
$ ldd /usr/sbin/sshd | grep libwrap
Sample outputs:
libwrap.so.0 => /lib64/libwrap.so.0 (0x00002abd70cbc000)
The output indicates that the OpenSSH (sshd) daemon supports TCP Wrapper.
Other uses of the ldd command
You can use the ldd command when an executable is failing because of a missing dependency. Once you have found the missing dependency, you can install it or update the cache with the ldconfig command as mentioned above.
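The same dependency listing can be collected programmatically, much like the bash function above does. Here is a minimal Python sketch (the helper name is made up for the example, and it assumes a Linux system where the ldd command and /bin/ls exist):

```python
import shutil
import subprocess

def shared_libs(binary):
    """Return the resolved shared-library paths reported by ldd."""
    out = subprocess.run(["ldd", binary], capture_output=True, text=True).stdout
    libs = []
    for line in out.splitlines():
        parts = line.split()
        # Lines look like: "libm.so.6 => /lib64/libm.so.6 (0x...)"
        if "=>" in parts and len(parts) >= 3 and parts[2].startswith("/"):
            libs.append(parts[2])
    return libs

# Only attempt this where the ldd command actually exists.
deps = shared_libs("/bin/ls") if shutil.which("ldd") else []
print(deps)
```

Each returned path could then be copied into the jail exactly as the bash version does.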
#3: ltrace command
The ltrace command runs the specified command until it exits, intercepting and recording the dynamic library calls made by the process and the signals it receives. For example:
# ltrace /usr/sbin/httpd
# ltrace /sbin/chroot /usr/sbin/httpd
# ltrace /bin/ls
Sample outputs:
__libc_start_main(0x804fae0, 1, 0xbfbd6544, 0x805bce0, 0x805bcd0
strrchr("/bin/ls", '/') = "/ls"
setlocale(6, "") = "en_IN.utf8"
bindtextdomain("coreutils", "/usr/share/locale") = "/usr/share/locale"
textdomain("coreutils") = "coreutils"
__cxa_atexit(0x8052d10, 0, 0, 0xbfbd6544, 0xbfbd6498) = 0
isatty(1) = 1
getenv("QUOTING_STYLE") = NULL
getenv("LS_BLOCK_SIZE") = NULL
getenv("BLOCK_SIZE") = NULL
getenv("BLOCKSIZE") = NULL
getenv("POSIXLY_CORRECT") = NULL
getenv("BLOCK_SIZE") = NULL
getenv("COLUMNS") = NULL
ioctl(1, 21523, 0xbfbd6470) = 0
getenv("TABSIZE") = NULL
getopt_long(1, 0xbfbd6544, "abcdfghiklmnopqrstuvw:xABCDFGHI:"..., 0x0805ea40, -1) = -1
__errno_location() = 0xb76b8694
malloc(40) = 0x08c8e3e0
memcpy(0x08c8e3e0, "", 40) = 0x08c8e3e0
... output truncated ...
free(0x08c8e498) =
free(NULL) =
free(0x08c8e480) =
exit(0
__fpending(0xb78334e0, 0xbfbd6334, 0xb78876a3, 0xb78968f8, 0) = 0
fclose(0xb78334e0) = 0
__fpending(0xb7833580, 0xbfbd6334, 0xb78876a3, 0xb78968f8, 0) = 0
fclose(0xb7833580) = 0
+++ exited (status 0) +++
The ltrace command is a handy debugging utility on Linux:
- To monitor the library calls used by a program and all the signals it receives.
- For tracking the execution of processes.
- It can also show the system calls used by a program.
ltrace Command Examples
Consider the following C program:
#include <stdio.h>

int main(){
	printf("Hello world\n");
	return 0;
}
Compile and run it as follows:
$ cc hello.c -o hello
$ ./hello
Now use the ltrace command to track the execution of the process:
$ ltrace -S -tt ./hello
Sample outputs:
15:20:38.561616 SYS_brk(NULL) = 0x08f42000
15:20:38.561845 SYS_access("/etc/ld.so.nohwcap", 00) = -2
15:20:38.562009 SYS_mmap2(0, 8192, 3, 34, -1) = 0xb7708000
15:20:38.562155 SYS_access("/etc/ld.so.preload", 04) = -2
15:20:38.562336 SYS_open("/etc/ld.so.cache", 0, 00) = 3
15:20:38.562502 SYS_fstat64(3, 0xbfaafe20, 0xb7726ff4, 0xb772787c, 3) = 0
15:20:38.562629 SYS_mmap2(0, 76469, 1, 2, 3) = 0xb76f5000
15:20:38.562755 SYS_close(3) = 0
15:20:38.564204 SYS_access("/etc/ld.so.nohwcap", 00) = -2
15:20:38.564372 SYS_open("/lib/tls/i686/cmov/libc.so.6", 0, 00) = 3
15:20:38.564561 SYS_read(3, "\177ELF\001\001\001", 512) = 512
15:20:38.564694 SYS_fstat64(3, 0xbfaafe6c, 0xb7726ff4, 0xb7705796, 0x8048234) = 0
15:20:38.564822 SYS_mmap2(0, 0x1599a8, 5, 2050, 3) = 0xb759b000
15:20:38.565076 SYS_mprotect(0xb76ee000, 4096, 0) = 0
15:20:38.565209 SYS_mmap2(0xb76ef000, 12288, 3, 2066, 3) = 0xb76ef000
15:20:38.565454 SYS_mmap2(0xb76f2000, 10664, 3, 50, -1) = 0xb76f2000
15:20:38.565604 SYS_close(3) = 0
15:20:38.565709 SYS_mmap2(0, 4096, 3, 34, -1) = 0xb759a000
15:20:38.565842 SYS_set_thread_area(0xbfab030c, 0xb7726ff4, 0xb759a6c0, 1, 0) = 0
15:20:38.566070 SYS_mprotect(0xb76ef000, 8192, 1) = 0
15:20:38.566185 SYS_mprotect(0x08049000, 4096, 1) = 0
15:20:38.566288 SYS_mprotect(0xb7726000, 4096, 1) = 0
15:20:38.566381 SYS_munmap(0xb76f5000, 76469) = 0
15:20:38.566522 __libc_start_main(0x80483e4, 1, 0xbfab04e4, 0x8048410, 0x8048400
15:20:38.566667 puts("Hello world"
15:20:38.566811 SYS_fstat64(1, 0xbfab0310, 0xb76f0ff4, 0xb76f14e0, 0x80484c0) = 0
15:20:38.566936 SYS_mmap2(0, 4096, 3, 34, -1) = 0xb7707000
15:20:38.567126 SYS_write(1, "Hello world\n", 12Hello world ) = 12
15:20:38.567282 <... puts resumed> ) = 12
15:20:38.567348 SYS_exit_group(0
15:20:38.567454 +++ exited (status 0) +++
You need to carefully monitor the order and arguments of selected functions such as open() [used to open and possibly create a file or device] or chown() [used to change the ownership of a file], so that you can spot simple kinds of race conditions or security related problems. This is quite useful for evaluating the security of binary programs and finding out what kind of changes they make to the system.
ltrace: Debugging Memory & I/O Usage For HA Based Cluster Computers
The ltrace command can be used to trace the memory usage of the malloc() and free() functions in a C program. You can track the amount of memory allocated as follows:
[node303 ~]$ ltrace -e malloc,free ./simulator arg1 agr2 arg3
ltrace will start the ./simulator program and trace its malloc() and free() calls. You can look into I/O problems as follows:
[node303 ~]$ ltrace -e fopen,fread,fwrite,fclose ./simulator arg1 agr2 arg3
You may need to change the function names, as your programming language or UNIX platform may use different memory allocation functions.
#4: ld.so/ld-linux.so Command
The ld.so / ld-linux.so dynamic linker/loader is used by Linux:
- To load the shared libraries needed by a program.
- To prepare the program to run, and then run it.
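Beyond load-time linking, the same loader machinery is also reachable from a running program through dlopen(). A small Python sketch: ctypes.CDLL() wraps dlopen(), so it exercises exactly this mechanism. The soname "libm.so.6" is a glibc assumption, so the call is guarded:

```python
import ctypes

# ctypes.CDLL() calls dlopen() under the hood, i.e. the same dynamic
# loader machinery (ld.so / ld-linux.so) described above.
try:
    libm = ctypes.CDLL("libm.so.6")
    libm.sqrt.restype = ctypes.c_double     # prototype: double sqrt(double)
    libm.sqrt.argtypes = [ctypes.c_double]
    result = libm.sqrt(9.0)
except OSError:
    result = None                           # library not found on this system

print(result)
```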
List All Dependencies and How They Are Resolved
Type the following command:
# cd /lib
For 64 bit systems:
# cd /lib64
Pass the --list option, enter:
# ./ld-2.5.so --list /path/to/executable
Other options
From the man page:
--verify              verify that given object really is a dynamically linked object we can handle
--library-path PATH   use given PATH instead of content of the environment variable LD_LIBRARY_PATH
--inhibit-rpath LIST  ignore RUNPATH and RPATH information in object names in LIST
Environment Variables
The LD_LIBRARY_PATH environment variable can be used to set a search path for dynamic libraries, in the standard colon-separated format:
$ export LD_LIBRARY_PATH=/opt/simulator/lib:/usr/local/lib
The LD_PRELOAD environment variable allows an extra library, not specified in the executable, to be loaded:
$ export LD_PRELOAD=/home/vivek/dirhard/libdiehard.so
Please note that these variables are ignored when executing setuid/setgid programs.
Recommended readings:
- HowTo: Debug Crashed Linux Application Core Files Like A Pro
- Debugging Tip: Trace the Process and See What It is Doing with strace
- man pages ldconfig, ld.so, ldd, ltrace
- Dynamic Linking and Loading and Shared libraries
- How to create shared library under Linux with lots of information about internal structures
- Anatomy of Linux dynamic libraries - This article investigates the process of creating and using dynamic libraries, provides details on the various tools for exploring them, and explores how these libraries work.
Don’t forget that some distributions allow for .conf files to be placed in ld.so.conf.d that contain path information as well.
Thanks, Vivek. This is so informational. I have bookmarked this page for ready reference.
Great reading, thank you!
thanks for this detailed steps, keep posting, one question in meego where can i configure a shutdown button in desktop ( sorry 4 this offtopic qn)
Awesome read,
Thanks Vivek.
Very Nice! thank you!
Hi, I am having a problem to run this script. i do not know why so can you solve this. i think is a mistake in script. Can you help me to solve this mistake.?
Also thanks for the article. ldtrace proved very usful for some debugging problems!
s/ldtrace/ltrace/ ;)
helped a lot…nice matter
Hi Vivek,
Useful topic,
Is it a little typo or wrong word here: “Determine If Particulate Feature Supported Or Not”
Should it be: Determine if __particular__ feature _is_ supported or not
(and why so much Capital letters?)
–P
Philippe,
Thanks for the heads up :)
(and why so much Capital letters?)
I guess bad writing style..
great writeup, thanks.
I recently reviewed the book, Arista Warriors. Obviously the Python chapter interested me the most; here is the output as I tried the example from the book, modifying the 'show version' output with a few lines of Python.
***** Experiment on Arista CliPlugin *****
[user@switch CliPlugin]$ pwd
/usr/lib/python2.7/site-packages/CliPlugin
[user@switch CliPlugin]$ sudo vi VersionCli.py
def showVersion( mode, detail=None ):
<skip>
# Print commands (delete after)
print "*" * 10
print "I dont really like Pie"
print "*" * 10
[user@switch ~]$ Cli -c "show version"
**********
I dont really like Pie
**********
Arista DCS-7504
Hardware version: 02.00
<skip>
Software image version: 4.10.3
Architecture: i386
Internal build version: 4.10.3-937242.EOS4103
Internal build ID: a229c9db-af32-4e62-a4f7-5711e977d968
Uptime: 6 weeks, 1 day, 22 hours and 51 minutes
Total memory: 4100488 kB
Free memory: 1758148 kB
[user@switch ~]$
That is pretty cool. But how do I write more native-looking commands? Or how do I write an agent that can mount to SysDB directly? I don't know, but here are the modules that I plan to look more into for the command part, stay tuned:
import Tac, CliParser, BasicCli, os, Tracing, EosVersion, Ethernet
***** Review *****
Here is the Review as appeared on Amazon:
I have been working with Arista switches for a while now; this is the manual that probably should have come with the Arista switches. As the author mentions, the EOS command syntax is very similar to Cisco IOS. In fact, I have heard of shops where engineers simply copied and pasted IOS configurations into EOS during migration and they worked just fine. However, to tap into the capabilities that make Arista a game-changer, one has to get into the realms of SysDB, Python, Linux user space, etc. Anybody can type in commands, but the real challenge lies in the impact and scope of what you are trying to do. This book does a good job of covering the practical stuff that you can use in your day-to-day work, as well as the concepts behind it.
Overall, I would recommend this as a solid investment of money and time for anybody looking into Arista switches.
Pro:
- Real world examples.
- Solid explanation of concepts.
- Sense of humor for an otherwise dry subject.
Notes, suggestions, errata:
1. Maybe more coverage of the current fat-tree design with spine/leaf/core, etc. This is one area where Arista differs from competitors, e.g. the number of ECMP next-hops, TCAM division of host routes, etc.
2. Power draw is critically important in large scale data centers; Arista has some good innovation in this area with its PHY-less design.
*** Virtual Machines on Arista ***
1. If you don't have an Arista switch handy to practice, or just want a safe environment to practice with, you can run vEOS off a VM: vEOS, by Andre Pech.
2. When you are in a pinch, you can also run another VM directly in EOS: Running Virtual Machines in EOS, by Mark Berly.
*** sFlow ***
The whole chapter on sFlow probably warrants more coverage. This is an important telemetry tool that offers lots of information, and the right direction going forward, IMO. It offers the ability to do push telemetry rather than pull (such as SNMP), which scales better.
It is also important in the sense of data center billing for the counters. If you are, say, Yahoo and have one of the biggest Hadoop clusters, you would want to know who your top talkers are so you can bill them for the network overhead accordingly. This is typically done with NetFlow exporting to a collector (more on that in a bit), but if you have a network of Arista switches whose traffic does not cross the core, sFlow counters are your current best bet.
Because aggregation is done in the onboard flow cache 'before' it is sent to the collector, NetFlow often falls down under even a moderate amount of traffic in data centers. You are forced to scale down the flow sampling rate, which increases the error delta. sFlow, on the other hand, just samples and pushes all the intelligence to the collector.
The author hints at this, but here is an early peek at troubleshooting data plane traffic with sflowtool:
1. Running the open source sflowtool directly on Arista switches for troubleshooting data plane traffic that does not cross the CPU:
arista#bash sudo /mnt/flash/sflowtool -t | tcpdump -r - -vv
*** Python ***
Python should have more coverage in the book, as that is what the Arista CLI is built on. Just some pointers for motivated network engineers on which modules to look into, and the location of the files, would be helpful.
1. I have asked when Python 3 will be included in Arista; the best guess is when Fedora updates their OS to make 3 the default.
*** Random Notes about the book ***
1. I think 'sh run all interface' was in 4.7.x, then for some reason went away in 4.8.x, then came back after 4.9.x.
2. I wish the book covered more of the SysDB mount points that Mark Berly points out on EOS Central.
3. IPv6 chapter in the works with 4.10.x code?
4. Nice tip about generating traffic with 'ping -s 15000 -c 10000 10.10.10.15 > /dev/null &'. I have done that before but couldn't see the traffic right away and killed it.
5. Why wouldn't cron work on Arista (chapter 23)?
6. Nice tips: I didn't know that tcpdump can be executed directly from EOS, that files outside a select few locations do not survive reloads, emails, etc.
7. ZTP chapter: typo, should be EOS 4.7 and after, not 3.7.
8. ZTP chapter: instead of identifying by MAC address, the script should identify via the relay agent or the device's place in the topology (show lldp); the MAC address changes due to RMA. (Also a typo there.) Also, manually editing the DHCP config file does not scale.
9. Event-Handler chapter: more event-handler triggers are indeed needed in Arista for the feature to be more useful.
10. Event-Handler chapter: just like a regular bash script, you can 'daemonize' and chain the commands with ';'.
22. Event-handler chapter: There is at least one bug in event-handler in 4.8.3 where configuring 'on boot-up' triggers the event-handler right away. Be careful if the startup script includes anything that is production impacting.
23. I like the 'advanced usage of sqlite' a lot; it gives me some ideas for using sqlite for other features as well. Maybe show the Python integration with sqlite for scripting purposes?
27. I like that the author pointed out the difference between the default flash: location vs. having to specify a full Unix path via the file: command. I wish I had known this; it would've saved me some time copying stuff from /var/log -> /mnt/flash -> transfer.
28. CloudVision: I wouldn't recommend the use of XMPP in production either. Use the upcoming JSON API instead.
29. Page 360, pretty sure that 'spline' is a typo for 'spine'.
30. Here is a talk by Andy Bechtolsheim in NANOG 55, helps to understand Arista's vision:
*** Commands that I wish the book included ***
1. favorite command: 'show interface counters rates | nz'
2. switch(s1)#sh logging last ?
<1-9999> Number of time units (sec|min|hr|day)
nice post
I'm supposed to write a program which prompts the user to enter a sentence and then prompts the user to enter a single letter of the alphabet (into a char value). It's also supposed to count and display the number of words containing the letter entered (not case sensitive) and the total number of occurrences of the letter entered (not case sensitive). For example, if the user entered the sentence 'She sells seashells by the seashore.' and the letter 'S', the output from the program would be:
4 words contain the letter S.
8 S are found in the sentence.
import java.util.StringTokenizer;
import java.util.Scanner;

public class CountSent
{
   public static void main(String[] args)
   {
      Scanner scan = new Scanner(System.in);
      String sentInput = null, oneWord = null, numInput = null;
      StringTokenizer st = null;
      String letter = "";

      System.out.println("\n\nThis program asks you to type in a sentence,");
      System.out.println("it then requires a single letter of the alphabet");
      System.out.println("to be entered in and displays the number of words");
      System.out.println("as well as the total number of occurrences.");

      System.out.print("\n\nEnter a sentence: ");
      sentInput = scan.nextLine();

      System.out.print("\n\nEnter a letter: ");
      numInput = scan.nextLine();

      sentInput = sentInput.substring(0, sentInput.length() - 1);
      st = new StringTokenizer(sentInput);
      oneWord = st.nextToken();
      letter = oneWord;

      while (st.countTokens() > 0)
      {
         oneWord = st.nextToken();
         if (oneWord.length() >= letter.length())
            letter = oneWord;

         for (int index = 0; index < oneWord.length(); index++)
            if (oneWord.charAt(index) == 'G' || oneWord.charAt(index) == 'g')
            {
               System.out.println(oneWord);
               index = oneWord.length();
            }
      }

      System.out.println("\n\n" + letter + " letters contain the letter 'G'.");
      System.out.println(letter.length() + " are found in the sentence.");
   }
}
The above is how far I've gotten on the coding. The obvious problem is, I have no clue how to hold on to the letter entered by the user and display the number of times it appears in the sentence. The only, for lack of a better word, lead I have is the tidbit with the letter 'G'.
Hello guys.
I'm a newbie in OpenGL and I'm trying to make a program with keyboard events, just to make some tests.
I'm using Debian Linux and g++ to compile the code, which I'm writing in C++.
The problem is that when I run my program, what I see is:

Object created.
Object killed with success.

I thought the "main" method would be waiting for me to push a button on the keyboard. I'm saying this because of "glutMainLoop();". Why doesn't that happen, and why does the "glutMainLoop();" method not enter its loop?
Code :
//keyboard.h
#ifndef __keyboard_h__
#define __keyboard_h__

#include <iostream>
#include <GL/gl.h>
#include <GL/glut.h>
#include <GL/glx.h>

using namespace std;

class Keyboard
{
    public:
        Keyboard(void);
        ~Keyboard(void);
        void keyPressed(unsigned char key, int x, int y);
};
#endif

Code :
//keyboard.cpp
#include "keyboard.h"

Keyboard :: Keyboard(void)
{
    cout << "Object created." << endl;
}

Keyboard :: ~Keyboard(void)
{
    cout << "Object killed with success." << endl;
}

void Keyboard :: keyPressed(unsigned char key, int x, int y)
{
    switch(key)
    {
        case 'j':
            cout << "Nothing, just testing." << endl;
            break;
    }
}

Code :
//main.cpp
#include "keyboard.h"

Keyboard global_keyboard;

void keypress_wrapper(unsigned char key, int x, int y)
{
    global_keyboard.keyPressed(key, x, y);
}

int main(int argc, char **argv)
{
    glutInit(&argc, argv);
    glutKeyboardFunc(keypress_wrapper);
    glutMainLoop();
}
To compile:
g++ -o test main.cpp keyboard.cpp -lglut

What am I doing wrong?
Linux is a free, multithreading, multiuser operating system that has been ported to several different platforms and processor architectures. This chapter gives an overview of the common system model of Linux, which is also the basis of the maemo platform. The concepts described in this chapter include the kernel, processes, memory, the filesystem, libraries and linking.
The kernel is the very heart of a Linux system. It controls the resources and memory, schedules processes and their access to the CPU, and is responsible for communication between software and hardware components. The kernel provides the lowest-level abstraction layer for resources like memory, CPU and I/O devices. Applications that want to perform any function with these resources communicate with the kernel using system calls.

System calls are generic functions (such as write) that handle the actual work with different devices, process management and memory management. The advantage of system calls is that the actual call stays the same regardless of the device or resource being used. Porting software to different versions of the operating system also becomes easier when the system calls stay consistent between versions.
Kernel memory protection divides the virtual memory into kernel space and user space. Kernel space is reserved for the kernel, its extensions and the device drivers. User space is the area of memory where all user mode applications work. A user mode application can access hardware devices, virtual memory, file management and other kernel services running in kernel space only by using system calls. There are over 100 system calls in Linux; documentation for them can be found in the Linux kernel system calls manual pages (man 2 syscalls).
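As an illustration of why this matters, the write system call behaves identically no matter what kind of object the file descriptor refers to. The following sketch uses Python's os module, whose functions are thin wrappers around the corresponding system calls (the file name is made up for the example):

```python
import os
import tempfile

# os.write() is a thin wrapper around the write() system call: the same
# call works on any file descriptor, regardless of whether it refers to
# a regular file, a terminal, a pipe or a device.
with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, "hello.txt")

    fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o644)   # open() syscall
    written = os.write(fd, b"Hello, kernel!\n")           # write() syscall
    os.close(fd)                                          # close() syscall

    with open(path, "rb") as f:
        content = f.read()

print("bytes written:", written)
print("file contents:", content)
```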
A kernel architecture where the kernel is run in kernel space in supervisor mode and provides system calls to implement the operating system services is called a monolithic kernel. Most modern monolithic kernels, such as Linux, provide a way to dynamically load and unload executable kernel modules at runtime. Modules allow extending the kernel's capabilities (for example, adding a device driver) as required, without rebooting or rebuilding the whole kernel image. In contrast, a microkernel is an architecture where device drivers and other code are loaded and executed on demand and are not necessarily always in memory.
[ Kernel, hardware and software relations ]
A process is a program that is being executed. It consists of the executable program code, a set of resources (for example open files), an address space, internal data for the kernel, possibly one or more threads, and a data section.

Each process in Linux has a number of characteristics associated with it, such as its process ID, its owner and its parent.

Every process has a PID, the process ID. This is a unique identification number used to refer to the process and a way for the system to tell processes apart from each other. When a user starts a program, the process itself and all processes started by that process will be owned by that user (the process RUID); thus the process's permissions to access files and system resources are determined by that user's permissions.
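These identifiers can be queried by a process at runtime through the getpid, getppid and getuid system calls; a small sketch using Python's os module wrappers (Unix-specific):

```python
import os

# getpid()/getppid()/getuid() are direct wrappers around the
# corresponding system calls.
pid = os.getpid()     # this process's unique process ID
ppid = os.getppid()   # PID of the parent that forked us
ruid = os.getuid()    # real user ID (RUID) owning this process

print(f"PID={pid} PPID={ppid} RUID={ruid}")
```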
To understand the creation of a process in Linux, we must first introduce a few necessary concepts: fork, exec, parent and child. A process can create an exact clone of itself; this is called forking. After a process has forked itself, the newly born process gets a new PID and becomes a child of the forking process, and the forking process becomes the parent of the child. But we wanted to create a new process, not just a copy of the parent, right? This is where exec comes into action: by issuing an exec call to the system, the child process can overwrite its address space with the new process data (executable), which does the trick for us.
This is the only way to create new processes in Linux, and every running process on the system has been created in exactly the same way. Even the first process, called init (PID 1), is forked during the boot procedure (this is called bootstrapping).
If a parent process dies before its child process does (and the parent does not explicitly handle the killing of the child), init will become the parent of the orphaned child process, so the child's PPID will be set to 1 (the PID of init).
[ Example of the fork-and-exec mechanism ]
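The fork-and-exec mechanism in the figure above can be demonstrated in a few lines. This sketch uses Python's os module wrappers around fork(), exec() and wait(), and assumes a Unix system with echo in the PATH:

```python
import os

# fork(): the parent clones itself; fork() returns 0 in the child
# and the child's PID in the parent.
pid = os.fork()

if pid == 0:
    # Child: replace our address space with the echo program.
    # The PID stays the same -- only the program code is overwritten.
    os.execvp("echo", ["echo", "hello from the child"])
    os._exit(127)  # reached only if exec failed
else:
    # Parent: wait for the child and collect its exit status.
    _, status = os.waitpid(pid, 0)
    print("child", pid, "exited with code", os.WEXITSTATUS(status))
```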
Parent-child relations between processes can be visualized as a hierarchical tree:
init-+-gconfd-2
     |-avahi-daemon---avahi-daemon
     |-2*[dbus-daemon]
     |-firefox---run-mozilla.sh---firefox-bin---8*[{firefox-bin}]
     |-udevd
...listing cut for brevity...
[ A part of a hierarchical tree of processes ]
The tree structure above also states difference between
programs and
processes. All processes are executable programs, but one program can start multiple processes. From the tree structure you can see that running
Firefox web browser has created 8 child processes for various tasks, still being one program. In this case, the preferred word to use from Firefox would be an application.
Whenever a process terminates normally (the program finishes without intervention from outside), the program returns a numeric exit status (return code) to the parent process. The value of the return code is program-specific; there is no standard for it, but usually exit status 0 means that the process terminated normally (no error). Processes can also be ended by sending them a signal. There are over 60 different signals in Linux; the most commonly used ones include SIGHUP (1), SIGINT (2), SIGKILL (9), SIGSEGV (11) and SIGTERM (15).
Only the user owning the process (or root) can send these signals to the process. The parent process can handle the exit status code of the terminating child process, but is not required to do so, in which case the exit status is lost.
Linux follows the UNIX-like operating systems' concept of a unified hierarchical namespace. All devices and filesystem partitions (whether local or accessible over the network) appear to exist in a single hierarchy. In the filesystem namespace, all resources can be referenced from the root directory, indicated by a forward slash (/), and every file or device existing on the system is located under it somewhere. You can access multiple filesystems and resources within the same namespace: you just tell the operating system the location in the filesystem namespace where you want the specific resource to appear. This action is called mounting, and the namespace location where you attach the filesystem or resource is called a mount point.
The mounting mechanism allows establishing a coherent namespace where different resources can be overlaid nicely and transparently. In contrast, the filesystem namespace found in Microsoft operating systems is split into parts and each physical storage is presented as a separate entity, e.g. C:\ is the first hard drive, E:\ might be the CD-ROM device.
Example of mounting: let us assume we have a memory card (MMC) containing three directories, named first, second and third. We want the contents of the MMC card to appear under the directory /media/mmc2. Let us also assume that the device file of the MMC card is /dev/mmcblk0p1. We issue the mount command and tell it where in the filesystem namespace we would like to mount the MMC card:
/ $ sudo mount /dev/mmcblk0p1 /media/mmc2
/ $ ls -l /media/mmc2
total 0
drwxr-xr-x 2 user group 1 2007-11-19 04:17 first
drwxr-xr-x 2 user group 1 2007-11-19 04:17 second
drwxr-xr-x 2 user group 1 2007-11-19 04:17 third
/ $
[ After mounting, contents of MMC card can be seen under directory /media/mmc2 ]
In addition to physical devices (local or networked), Linux supports several pseudo filesystems (virtual filesystems) that behave like normal filesystems but do not represent actual persistent data; rather, they provide access to system information, configuration and devices. By using pseudo filesystems, the operating system can provide more resources and services as part of the filesystem namespace. There is a saying that nicely describes the advantage: "In UNIX, everything is a file".

Examples of pseudo filesystems in Linux and their contents:

- procfs: information about processes and the kernel, usually mounted under /proc
- sysfs: information about devices, drivers and kernel subsystems, usually mounted under /sys
- device files, e.g. devices as files; an abstraction for accessing I/O and peripherals, usually mounted under /dev
Using pseudo filesystems provides a nice way to access kernel data and several devices from userspace processes, using the same API and functions as with regular files.
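For example, the proc pseudo filesystem lets a process read kernel data about itself with ordinary file operations. A guarded sketch (procfs is Linux-specific, so the path may not exist everywhere):

```python
import os

# /proc/self is a symlink to the /proc entry of the calling process.
status_path = "/proc/self/status"

if os.path.exists(status_path):          # procfs is Linux-specific
    with open(status_path) as f:         # a plain read() on kernel data
        first_line = f.readline().strip()
else:
    first_line = "procfs not mounted"

print(first_line)    # on Linux, e.g. "Name:   python3"
```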
Most Linux distributions (as well as the maemo platform) also follow the Filesystem Hierarchy Standard (FHS) quite closely. FHS is a standard which consists of a set of requirements and guidelines for file and directory placement under UNIX-like operating systems.
[ Example of filesystem hierarchy ]
The most important directories and their contents:

- /bin: essential command binaries that need to be available e.g. in single user mode
- /sbin: essential system binaries (e.g. init, insmod, ifup), likewise needed in single user mode
More information about FHS can be found on its homepage.
For most users, understanding the tree-like structure of the filesystem namespace is enough. In reality, things get more complicated than that. When a physical storage device is taken into use for the first time, it must be partitioned. Every partition has its own filesystem, which must be initialized before first use. By mounting the initialized filesystems, the tree structure of the entire system is formed.
When a filesystem is created on a partition, data structures containing information about files are written to it. These structures are called inodes. Each file in the filesystem has an inode, identified by an inode serial number. An inode contains the following information about the file: its type and access rights, its owner and group, its size, its timestamps, the number of links to it, and the location of its data blocks on the device.
The only information not included in an inode is the file name and directory. These are stored in special directory files, each entry of which contains one filename and one inode number. The kernel can search a directory, looking for a particular filename, and by using the inode number the actual content of the inode can be found, thus allowing the content of the file to be reached.
Inode information can be queried for a file using the stat command:
user@system:~$ stat /etc/passwd
  File: `/etc/passwd'
  Size: 1347       Blocks: 8          IO Block: 4096   regular file
Device: 805h/2053d    Inode: 209619    Links: 1
Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2007-11-23 06:30:49.199768116 +0200
Modify: 2007-11-17 21:27:02.271803959 +0200
Change: 2007-11-17 21:27:02.771832454 +0200
[ Using
stat command to view inode information ]
Notice that the stat command shown above is from a desktop Linux system, as the maemo platform does not ship the stat command by default.
One advantage of inodes is that several directory entries (filenames) can refer to the same inode. This is called hard linking, or just linking. Notice that hard links only work within one partition, as inode numbers are only unique within a given partition.
In addition to hard links, Linux filesystems also support soft links, more commonly called symbolic links (or symlinks for short). A symbolic link contains the path to the target file instead of a physical location on the disk. Since symlinks do not use inode numbers to refer to their target, they can span partitions. Symlinks are transparent to processes that read or write files pointed to by a symlink; the process behaves as if it were operating directly on the target file.
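The difference between the two link types can be demonstrated with a small sketch (our own; it assumes the underlying filesystem supports hard and symbolic links, as typical Linux filesystems do):

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class LinkDemo {

    /** A hard link is a second directory entry for the same inode,
     *  so both names see exactly the same content. */
    public static boolean hardLinkSharesContent() {
        try {
            Path dir = Files.createTempDirectory("linkdemo");
            Path target = Files.write(dir.resolve("target.txt"), "hello".getBytes());
            Path hard = Files.createLink(dir.resolve("hard.txt"), target);
            return "hello".equals(new String(Files.readAllBytes(hard)));
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    /** A symlink stores a path rather than an inode number, so it is
     *  left dangling when its target is removed. */
    public static boolean symlinkDanglesAfterDelete() {
        try {
            Path dir = Files.createTempDirectory("linkdemo");
            Path target = Files.write(dir.resolve("target.txt"), "hello".getBytes());
            Path sym = Files.createSymbolicLink(dir.resolve("sym.txt"), target);
            Files.delete(target);
            // The link file itself still exists, but following it now fails.
            return Files.isSymbolicLink(sym) && !Files.exists(sym);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println("hard link shares content: " + hardLinkSharesContent());
        System.out.println("symlink dangles after delete: " + symlinkDanglesAfterDelete());
    }
}
```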
The Linux security model is based on the UNIX security model. Every user on the system has a user ID (UID), and every user belongs to one or more groups (identified by a group ID, GID). Every file (or directory) is owned by a user and by a group. There is also a third category of users, others: the users who are not the owner of the file and do not belong to the group owning the file.
For each of the three user categories, there are three permissions that can either be granted or denied:
- read (r)
- write (w)
- execute (x)
The file permissions can be checked simply by issuing an ls -l command:
/ $ ls -l /bin/ls
-rwxr-xr-x 1 root root 78004 2007-09-29 15:51 /bin/ls
/ $ ls -l /tmp/test.sh
-rwxrw-r-- 1 user users 67 2007-11-19 07:13 /tmp/test.sh
[ Example of file permissions in
ls output ]
The first 10 characters in the output describe the file type and the permission flags for all three user categories:
- character 1: the file type (- means regular file, d means directory, l means symlink)
- characters 2-4: permissions for the actual owner of the file (- means denied)
- characters 5-7: permissions for the group owner of the file (- means denied)
- characters 8-10: permissions for other users (- means denied)
The output also lists the owner and the group owner of the file, in that order.
Let us look closer at what this all means. For the first file, /bin/ls, the owner is root and the group owner is also root. The first three permission characters (rwx) indicate that the owner (root) has read, write and execute permissions on the file. The next three characters (r-x) indicate that the group owner has read and execute permissions. The last three characters (r-x) indicate that all other users have read and execute permissions.
The second file, /tmp/test.sh, is owned by the user user and group-owned by the group users. For user (the owner) the permissions (rwx) are read, write and execute. For members of the users group the permissions (rw-) are read and write. For all other users the permission (r--) is read only.
File permissions can also be presented as octal values, where every user category is identified with one octal digit. The octal value for one category is obtained by adding together the values from the following table:
                 Octal value   Binary presentation
Read access           4            100
Write access          2            010
Execute access        1            001

Example: converting "rwxr-xr--" to octal:
  First group  "rwx" = 4 + 2 + 1 = 7
  Second group "r-x" = 4 + 1     = 5
  Third group  "r--" = 4         = 4
So "rwxr-xr--" becomes 754.
[ Converting permissions to octal values ]
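The same table arithmetic is easy to express in code. Below is a small sketch (our own; the class and method names are illustrative) that converts a nine-character permission string such as "rwxr-xr--" into its octal value:

```java
public class PermissionOctal {

    /** Convert a 9-character permission string such as "rwxr-xr--"
     *  to its octal value (returned as an int whose octal digits are
     *  the three per-category values). */
    public static int toOctal(String perms) {
        if (perms.length() != 9) {
            throw new IllegalArgumentException("expected 9 characters");
        }
        int result = 0;
        for (int i = 0; i < 9; i += 3) {
            int group = 0;
            if (perms.charAt(i)     != '-') group += 4;  // read
            if (perms.charAt(i + 1) != '-') group += 2;  // write
            if (perms.charAt(i + 2) != '-') group += 1;  // execute
            result = result * 8 + group;  // append one octal digit
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(Integer.toOctalString(toOctal("rwxr-xr--")));  // 754
        System.out.println(Integer.toOctalString(toOctal("rwxrw-r--")));  // 764
    }
}
```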
As a process runs with the effective permissions of the user who started it, the process can only access the same files as that user. The root account is a special case: the root user can override any permission flags on files and folders, so it is very advisable not to run unneeded processes or programs as the root user. Using the root account for anything other than system administration is not recommended.
As we stated earlier, processes are programs executing in the system. Processes are, however, also a bit more than that: they include a set of resources such as open files, pending signals, internal kernel data, processor state, an address space, one or more threads of execution, and a data section containing global variables. Processes, in effect, are the result of running program code; a program itself is not a process. A process is an active program together with its related resources.
Threads are the objects of activity inside a process. Each thread has its own program counter, stack, and processor registers. Threads require less overhead than forking a new process because the system does not have to initialize a new virtual memory space and environment for them. Threads are most effective on multi-processor or multi-core systems, where the process flow can be scheduled to run on another processor, gaining speed through parallel or distributed processing.
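A minimal sketch (our own, with illustrative names) of two threads sharing one address space: both update the same static counter, so the result reflects work done by both threads.

```java
public class ThreadDemo {

    // Threads within one process share the address space, so both
    // threads below update the same static counter.
    private static int counter = 0;

    public static int runTwoThreads() {
        Runnable work = () -> {
            synchronized (ThreadDemo.class) {  // avoid a lost update
                counter += 1;
            }
        };
        Thread a = new Thread(work);
        Thread b = new Thread(work);
        a.start();
        b.start();
        try {
            a.join();  // wait for both threads to finish
            b.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return counter;
    }

    public static void main(String[] args) {
        System.out.println("counter = " + runTwoThreads());
    }
}
```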
Daemons are processes that run unobtrusively in the background, without any direct control from a user (this is achieved by disassociating the daemon process from the controlling TTY). Daemons can perform various scheduled and non-scheduled tasks; often they respond to requests arriving from other computers over a network, from other programs, or from hardware activity. Daemons usually have init as their parent process, as daemons are launched by forking a child and letting the parent process die, in which case init adopts the child. Some examples of daemons are httpd (web server), lpd (printing service) and cron (command scheduler).
Linking refers to combining a program and its libraries into a single executable and resolving the symbolic names of variables and functions into their resulting addresses. Libraries are collections of commonly used functions combined into a package.
There are a few types of linking: static and dynamic. In addition to being loaded statically or dynamically, libraries can also be shared. Dynamic libraries are almost always shared; static libraries cannot be shared at all. Sharing allows the same library to be used by multiple programs at the same time, with the code shared in memory.
Dynamic linking of shared libraries provides multiple benefits, and most Linux systems use almost entirely dynamic libraries and dynamic linking.
Below is an image showing the decomposition of a very simple command-line program in Linux, dynamically linked only against glibc (and possibly Glib). As Glib is so commonly used in addition to the standard C library (glibc), the two libraries have been drawn in one box, although they are completely separate. The hardware devices are handled by the kernel, and the program accesses them only through the system call (syscall) interface.
[ Decomposition of a simple command-line program ] | http://maemo.org/maemo_training_material/maemo4.x/html/maemo_Technology_Overview_Chinook/Chapter_01_The_Linux_System_Model.html | CC-MAIN-2014-42 | refinedweb | 2,785 | 50.87 |
It is this ability to have web pages that update dynamically that is changing the way users interact with the web.
AJAX isn't the best acronym in the world: it stands for Asynchronous JavaScript and XML. This does nothing to describe the benefits to a user, the technology behind it does not have to be asynchronous, and the best implementations don't necessarily use XML, either. However, the buzzword has stuck, so we are better off going with the flow.
The problem for the web developer is that while this is a very
attractive way of creating websites, and one where you can get
started without a huge amount of effort, there are a number of
pitfalls that can make life harder. All browsers have
different quirks, so you can easily discover that, for example, you
have locked Mac users out of the party.
DWR, hosted on java.net, is a Java open source library that helps developers write websites that include AJAX technology. Its mantra is "Easy Ajax for Java." It allows code in a web browser to use Java functions running on a web server as if they were in the browser. In this article we will build a simple multi-user chat application with it. The emphasis will not be on fancy graphics or lots of chat features, because that would distract us from the core business of how to write AJAX code without lots of effort.
The Chat Web Page
The web page has two parts: one area where you can see the
messages that others type, and an input field where you can type
messages yourself. Figure 1 shows what it looks like.
Figure 1. The chat web page
The HTML is very simple: a "Messages:" area (a div with id="chatlog", where the chat log is displayed) and a "Your Message:" input field (with id="text") next to a send button that fires sendMessage().
We'll come to the JavaScript code in a bit, but let's start with
the server side. Just how much code do you need for a multi-user
web-based chat system?
Server-Side Java
We have two classes to do the server-side work. The first is the Message class, which holds a single string entered by the user. The Message also maintains a unique ID as a property. For now, we are going to cheat by using the current time in milliseconds as the ID:
public class Message
{
    public Message(String newtext)
    {
        text = newtext;
        if (text.length() > 256)
        {
            text = text.substring(0, 256);
        }
        text = text.replace('<', '[');
        text = text.replace('&', '_');
    }

    public long getId()
    {
        return id;
    }

    public String getText()
    {
        return text;
    }

    long id = System.currentTimeMillis();
    String text;
}
The constructor does a few simple things: it shortens messages to 256 characters, and it replaces < with [ and & with _ to prevent abuse. All fairly simple so far.
The other class in the server is the Chat class, which keeps track of the messages sent to the server. It is also very simple:
public class Chat
{
    public List addMessage(String text)
    {
        if (text != null && text.trim().length() > 0)
        {
            messages.addFirst(new Message(text));
            while (messages.size() > 10)
            {
                messages.removeLast();
            }
        }
        return messages;
    }

    public List getMessages()
    {
        return messages;
    }

    static LinkedList messages = new LinkedList();
}
And that's it for the server-side code!
Two of these methods are important from the web browser's point of view: addMessage(), which is called in response to a user typing in the input area, and getMessages(), which is polled from time to time to see if anyone else has said anything.
Configuring DWR
Now we need to remote these two methods to the web browser. The first step is to copy dwr.jar into your web app; you can download dwr.jar from its java.net project page. Next, you need to configure your app server's web.xml to understand DWR. The standard bit of code looks like this:
<servlet>
    <servlet-name>dwr-invoker</servlet-name>
    <display-name>DWR Servlet</display-name>
    <servlet-class>uk.ltd.getahead.dwr.DWRServlet</servlet-class>
    <init-param>
        <param-name>debug</param-name>
        <param-value>true</param-value>
    </init-param>
</servlet>

<servlet-mapping>
    <servlet-name>dwr-invoker</servlet-name>
    <url-pattern>/dwr/*</url-pattern>
</servlet-mapping>
Finally, you need to tell DWR about the chat server you just created. Specifically, you need to tell it two things:
- that Chat is safe to be remoted to the browser, and
- that Message is allowed as a parameter.
DWR could do the second bit for you, but we'll do it this way to make sure that you don't give away access to anything by mistake.
The DWR configuration file, dwr.xml, is placed alongside web.xml in your WEB-INF folder. For your chat application, dwr.xml should look like this (obviously, replace the [your.package] bits with the package that you used for the code above):

<!DOCTYPE dwr PUBLIC
    "-//GetAhead Limited//DTD Direct Web Remoting 1.0//EN"
    "">
<dwr>
    <allow>
        <create creator="new" javascript="Chat">
            <param name="class" value="[your.package].Chat"/>
        </create>
        <convert converter="bean" match="[your.package].Message"/>
    </allow>
</dwr>
We are telling DWR that it is OK to create Chat classes for remoting to the browser, and that in JavaScript they will be called 'Chat'. It also says that Message is safe to use as a parameter.
The Client-Side Scripting
The final bit is the JavaScript that is fired off by the HTML to call into the Java code. The good news is that DWR makes this bit easy. Typically, the JavaScript code for this type of thing would contain complex XMLHttpRequest code, DOM manipulation, and parameter collation. With DWR, you don't worry about any of that. First we include the JavaScript that tells the browser about the Chat code. There are three useful scripts to include: engine.js, util.js, and Chat.js.
The script engine.js contains the core of DWR. Generally, you just include it as is and then ignore it. There are a few methods in it that are sometimes useful; you can find their full documentation on the DWR website. The script util.js contains some utility functions that are totally optional but will help you greatly in getting anything done with DWR. Chat.js is dynamically generated by DWR as the remote version of Chat.java. If you look at it, you'll see something like this:
Chat.addMessage = function(callback, p0) { ... }
Chat.getMessages = function(callback) { ... }
DWR does everything it can to make the JavaScript version of
your Java code as simple as possible, but there are some things you
need to be aware of. The most obvious is that the "A" in AJAX
stands for asynchronous; so by definition, the remote method is
not executed the instant your JavaScript code is executed. This
would not be an issue, except for the complexity of knowing what to do
with the values returned by Java to the browser. DWR solves the
problem by asking for a callback method, to which it will pass the
returned data. The first parameter to any DWR-generated method is
always the callback function.
Above, we created a web page with a JavaScript function that we've not implemented until now: sendMessage(), the event handler fired off by the browser whenever the send button is pressed. As you might guess, it is going to call Chat.addMessage():
function sendMessage()
{
    var text = DWRUtil.getValue("text");
    DWRUtil.setValue("text", "");
    Chat.addMessage(gotMessages, text);
}
The first line gets the value from the input field.
DWRUtil.getValue() works with most HTML elements, so
long as they have an
id attribute (in this case, the
input element has an
id="text").
Next, we use the
setValue() method to blank out the
input element; again, the
setValue() is very smart at
working out what to do with your data and how it should update your
web page with the new data.
Then we call
Chat.addMessage() and ask DWR to
return the list of messages typed by other web users to the
gotMessages() function. It looks like this:
function gotMessages(messages)
{
    var chatlog = "";
    for (var data in messages)
    {
        chatlog = "<li>" + messages[data].text + "</li>" + chatlog;
    }
    DWRUtil.setValue("chatlog", chatlog);
}
This is where DWR excels. The Java method Chat.addMessage() returned a List of Message objects, and DWR has automatically converted this into an array of JavaScript objects. All we need to do in gotMessages() is iterate over the messages array, getting the text member from each object and building some HTML from it. Finally, we push the string we have created into the div using the ever-versatile setValue() method.
And that's it! We have a basic multi-user, web-based chat system in about 100 lines of code for both the client and the server. There are a number of things missing for this to be truly useful. A polling method that uses setTimeout() to call Chat.getMessages() would keep things flowing a bit more; the downloadable code contains an extra six lines of JavaScript to make this happen. We could also add code to alter the display only when new messages have arrived, which would make for a flicker-free display. Finally, a back-off mechanism where the browsers poll the server less often if nothing much is happening would be a good idea, to prevent swamping the server.
You can see the final version in the sample code WAR file listed under Resources. It adds the features listed above, plus highlighting of new messages, all of which takes an extra 50 or so lines of JavaScript.
It is also worth checking out [YOUR-WEB-APP]/dwr, a test/debug page that automatically shows you the classes you have remoted and allows you to test their functionality.
Conclusion
Using DWR can make creating cross-browser AJAX websites very
easy, and introduces a very neat way to interact between JavaScript
in the web browser and Java on the server. It is simple to get
started with and integrates well with your current website.
Resources
- Sample code WAR file, including all of the source
- The DWR website
- DWR download pages
- The original article that coined the term "AJAX"
- Google Maps
- Dictionary AJAX example
innstr, instr, mvinnstr, mvinstr, mvwinnstr, mvwinstr, winnstr, winstr - input a multi-byte character string from a window
#include <curses.h>

int innstr(char *str, int n);
int instr(char *str);
int mvinnstr(int y, int x, char *str, int n);
int mvinstr(int y, int x, char *str);
int mvwinnstr(WINDOW *win, int y, int x, char *str, int n);
int mvwinstr(WINDOW *win, int y, int x, char *str);
int winnstr(WINDOW *win, char *str, int n);
int winstr(WINDOW *win, char *str);
These functions place a string of characters from the current or specified window into the array pointed to by str, starting at the current or specified position and ending at the end of the line.
The innstr(), mvinnstr(), mvwinnstr() and winnstr() functions store at most n bytes in the string pointed to by str.
The innstr(), mvinnstr(), mvwinnstr() and winnstr() functions will only store the entire multi-byte sequence associated with a character. If the array is large enough to contain at least one character the array is filled with complete characters. If the array is not large enough to contain any complete characters, the function fails.
Upon successful completion, instr(), mvinstr(), mvwinstr() and winstr() return OK.
Upon successful completion, innstr(), mvinnstr(), mvwinnstr() and winnstr() return the number of characters actually read into the string.
Otherwise, all these functions return ERR.
No errors are defined.
Since multi-byte characters may be processed, there might not be a one-to-one correspondence between the number of column positions on the screen and the number of bytes returned.
These functions do not return rendition information.
Reading a line that overflows the array pointed to by str with instr(), mvinstr(), mvwinstr() or winstr() causes undefined results. The use of innstr(), mvinnstr(), mvwinnstr() or winnstr(), respectively, is recommended.
<curses.h>. | http://pubs.opengroup.org/onlinepubs/007908775/xcurses/mvinstr.html | CC-MAIN-2016-18 | refinedweb | 300 | 59.23 |
In this article I have explained some important concepts related to the C# language such as Implicit & Explicit type conversion, Boxing and UnBoxing of data types, static and nonstatic methods and the technique of creating automatic methods i.e. automatic method creation.
INTRODUCTION:
In this article I am going to explain some basic and useful concepts that can have tremendous value from a programming point of view in C#. Basically in this article I have explained type of conversion i.e. explicit and implicit, Boxing and Unboxing of data types, Static and NonStatic methods and the technique of creating automatic methods.
A. Type conversion
There are only two kinds of type conversion:
1. Implicit conversion: the default conversion, in which a 'lower' (smaller) type is converted to a 'higher' (larger) type. e.g. int a=10; long b=a;
2. Explicit conversion: a higher-to-lower conversion. A type cast is necessary for explicit conversion. e.g. int a=10; long b=a; int c=(int)b;
B. Boxing and Unboxing
1. Boxing: converting a value type into the object type. e.g. int a=10; object b=a;
2. Unboxing: converting a boxed object back into a value type. e.g. int a=10; object b=a; int c=(int)b;
NOTE: Type casting is necessary for unboxing.
Take a look at the following example:
int a=10; object b=a; long d=(long)b;
This is not possible because b holds a boxed int, and a boxed value can only be unboxed to its original type; casting it to long throws an InvalidCastException at runtime.
NOTE: We can unbox only values that have previously been boxed.
C. Types of methods
For example, take a console application as follows; the code looks like this:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace ConsoleApplication18
{
    class Program
    {
        static void Main(string[] args)
        {
            x.show();
            x obj = new x();
            obj.show1();
            Console.Read();
        }
    }

    class x
    {
        public static void show()
        {
            Console.WriteLine("Hello");
        }

        public void show1()
        {
            Console.WriteLine("hello1");
        }
    }
}
When we run this program, the output looks like this:
Hello
hello1
Hence we can call the static show() method with the help of the class name, whereas the show1() method must be called with the help of an object, because show1() isn't declared static.
D. Automatic method creation:
Visual Studio can extract selected code into a new method automatically. We can achieve this in the following way:
<Select Code - Right Click - Refactor - Extract Method>
Hi,
I am using XALAN C++ 1.8 & Xerces C++ 2.5 for transforming an XML document - with XSL 1.0.
My goal is to implement a date-difference function where one date is passed in through my XML and the other date is the current system date. The difference between these two dates in days is returned in the resultant XML file. I have implemented this using a JavaScript function inside an msxsl:script block, and when I try to run the transform, it throws an XalanXPathException saying that the namespace provided by me is invalid because the function is not available in that namespace. The transform works fine if I test it in my browser, but when tried with this parser, it fails.
Can somebody please suggest a workaround/fix for this?
The error is as below :
Description [StreamTransform Error: XalanXPathException: The function '' is not available. (, line 543, column 75)] Code [0] Line [107] File [<application path>\src\XSLTransformer.cpp]
A. Intro
Download the code for Lab 7 and create a new Eclipse project out of it.
Learning Goals for Today
Consider what happens when an error occurs, say, an out-of-bounds array access. The program may simply be allowed to crash. Java, however, also allows a programmer to intercept the error and handle it.
After learning about exceptions, you'll learn about consistency checking, a new technique to help you debug.
B. Exceptions
Error Handling
So far in this course, we have not dealt much with error handling. You were allowed to assume that the arguments given to methods were formatted or structured appropriately. However, this is not always the case, due to program bugs and incorrect user input.
Here are some options for handling errors in this situation:
- Don't pass any information to the caller at all. Print an error message and halt the program.
- Detect the error and set some global error indicator to indicate its cause.
- Require the method where the error occurs as well as all methods that directly or indirectly call it, to pass back (or take back) an extra argument object that can be set to indicate the error.
All these options have flaws, which you will now discuss with your classmates.
Discussion: Comparison of Error-Handling Approaches
Here are three approaches to error handling:
- Don't try to pass back any information to the caller at all. Just print an error message and halt the program.
- Detect the error and set some global error indicator to indicate its cause.
- Require the method where the error occurs, along with every method that, directly or indirectly, calls it, to pass back as its value or to take an extra argument object that can be set to indicate the error.
Which seems most reasonable?
When a method throws an exception, that exception must be dealt with by the method's caller. The caller can choose either to catch the exception and handle it, or to pass the exception on to the method that called it.
Think of a hot potato: when a method generates an exception, it doesn't want to deal with it, so it throws the exception to the next method. That method must do something when it receives the exception: it can either handle it, or throw it to the next method. Methods keep throwing the exception until one is willing to catch it. If no method ends up catching an exception, the Java runtime falls back on more drastic exception handlers, which may do things like exit the program and print a stack trace for the user. Earlier, we've called this behavior "crashing" the program.
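The hot-potato behavior can be sketched in a few lines (our own example, not part of the lab code): inner() throws, middle() declines to catch and lets the exception pass through, and outer() finally catches it.

```java
public class HotPotato {

    static void inner() {
        throw new RuntimeException("potato!");  // generate the exception
    }

    static void middle() {
        inner();  // no try/catch here: the exception passes straight through
    }

    public static String outer() {
        try {
            middle();
            return "no exception";
        } catch (RuntimeException e) {
            return "caught: " + e.getMessage();  // finally willing to catch it
        }
    }

    public static void main(String[] args) {
        System.out.println(outer());
    }
}
```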
There is a wide spectrum of "exceptional" situations. There may be catastrophic situations like the computer powering off. On the other hand, there are situations that might not even correspond to errors, like encountering the end of an input file. In between are errors varying in severity.
Exceptions in Java
Java's exception facility classifies all these into two categories:
- checked exceptions: exceptions that a method must explicitly handle or hand off to its caller
- unchecked exceptions: exceptions that a method need not worry about
The requirement to handle checked exceptions may encourage programmers to think about potential sources of error, with the result that more errors are detected and handled correctly.
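As an illustration (our own sketch, not lab code) of how the compiler treats the two categories differently: the checked IOException below must be caught or declared with throws, while the unchecked IllegalStateException requires neither.

```java
import java.io.IOException;

public class CheckedVsUnchecked {

    // A checked exception (IOException) must be declared with "throws"
    // or handled; the compiler enforces this.
    static void mightFail(boolean fail) throws IOException {
        if (fail) throw new IOException("checked");
    }

    // An unchecked exception (a subclass of RuntimeException) needs
    // no declaration.
    static void mightBlowUp(boolean fail) {
        if (fail) throw new IllegalStateException("unchecked");
    }

    public static String callBoth() {
        try {
            mightFail(true);  // the caller must handle or re-declare this
        } catch (IOException e) {
            return "handled " + e.getMessage();
        }
        return "no exception";
    }

    public static void main(String[] args) {
        System.out.println(callBoth());
        mightBlowUp(false);  // fine: no handler required by the compiler
    }
}
```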
Exceptions are just objects and are part of the regular class hierarchy. All exceptions in Java are subclasses of the Exception class.
An exception is thrown when the exceptional event occurs. Normally, an exception stops the program. One may, however, choose to catch the exception and process it in such a way that the program can continue executing.
Catching Exceptions in Java
Catching an exception means handling whatever exceptional condition has arisen.
This is done by surrounding code that might produce exceptional conditions
with a
try catch block as follows:
try {
    // code that might produce the exception
} catch (exceptionName variableName) {
    // code to handle the exceptional situation
}
An example of where an exception is particularly useful is dealing with user
input. The
AddingMachine program we saw in an earlier activity successively
read integers from the input. Users tend to be imperfect; they might mistype
the input. Thus, we need to be careful to make sure that they actually enter
a number to be added. The
Scanner class helps with this as its
nextInt
method throws an exception if the token being read isn't an integer.
Scanner intScan = new Scanner(System.in);
int k;
try {
    k = intScan.nextInt();
} catch (NoSuchElementException e) {
    // ran out of input
} catch (InputMismatchException e) {
    // token isn't an integer
}
Observe that the "tried" code can be simpler since it is coded as if nothing will go wrong. You can catch multiple exceptions, as in the code above, and handle them separately by supplying multiple catch blocks (ordered from most specific to least specific).
Generate Some Exceptions
Fill in the blanks in the code below (which is also in
TestExceptions.java) so that, when run, its output is:
/**
 * Desired output
 * 1) got null pointer
 * 2) got illegal array store
 * 3) got illegal class cast
 */
public class TestExceptions {

    public static void main(String[] args) {
        ________________ ;
        try {
            ________________ ;
        } catch (NullPointerException e) {
            System.out.println("got null pointer");
        }
        try {
            ________________ ;
        } catch (ArrayStoreException e) {
            System.out.println("got illegal array store");
        }
        try {
            ________________ ;
        } catch (ClassCastException e) {
            System.out.println("got illegal class cast");
        }
    }
}
Hint: If you're not sure what kinds of errors these exceptions are for, you can look them up in the Java documentation. If you don't happen to remember the link, googling the name of the exception will likely turn it up.
Throwing Exceptions
If your code may produce a checked exception, you have two choices. One is to
catch the exception. The other is to "pass the buck" by saying
throws exceptionName in the method header. This puts responsibility on the
calling method either to handle the exception or pass the exception to its
caller.
To throw an exception, we use the
throw operator and give it a newly
created exception as argument. For example, if a scanned value must be
positive, we could have the following:
k = intScan.nextInt();
if (k <= 0) {
    throw new IllegalArgumentException("value must be positive");
}
The programmer can easily define his or her own exceptions by extending Exception or RuntimeException. Each subclass typically provides two constructors: one with no arguments, and one with a String argument in which an error message is stored. Here's an example.
class Disaster extends RuntimeException {

    public Disaster() {
        super();
    }

    public Disaster(String msg) {
        super(msg);
    }
}
In the exception-catching code, we may access the
String argument to the
Exception constructor via the
getMessage() method.
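For instance, here is a small sketch (ours; the nested class simply mirrors the Disaster definition above) of throwing the custom exception and retrieving its message:

```java
public class DisasterDemo {

    // Mirrors the Disaster class defined in the text.
    static class Disaster extends RuntimeException {
        public Disaster() { super(); }
        public Disaster(String msg) { super(msg); }
    }

    public static String simulate() {
        try {
            throw new Disaster("the reactor is overheating");
        } catch (Disaster e) {
            // getMessage() retrieves the constructor's String argument.
            return e.getMessage();
        }
    }

    public static void main(String[] args) {
        System.out.println(simulate());
    }
}
```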
Example: Time Input
The code at the bottom of the page is also in
Time.java.
It represents a time of day in military time, storing a number of hours
that's between 0 and 23 and a number of minutes between 0 and 59.
Here is a method for testing the constructor, suitable for pasting into a JUnit file.
public void testConstructor() {
    String[] timeArgs = {null, "x", "x:", ":x", "x:y", "1:", ":30", "4: 35",
                         "55:00", "11:99", " 3:30", "00004:45", "4:007",
                         "4:7", "4 :09", "3:30", "11:55"};
    Time[] correctTimes = {null, null, null, null, null, null, null, null,
                           null, null, null, null, null, null, null,
                           new Time(3, 30), new Time(11, 55)};
    for (int k = 0; k < timeArgs.length; k++) {
        Time t = new Time(timeArgs[k]);
        assertEquals(correctTimes[k], t);
    }
}
Now do the following:
1. Modify the testConstructor method:
   - Surround the call to the Time constructor with try catch. In the catch block, print the corresponding error message, and make sure that the corresponding correctTimes entry is null, that is, that the constructor call threw an exception as it was supposed to.
   - If the constructor call didn't throw an exception, compare the constructed result with the corresponding correctTimes entry as in the existing code.
2. Within the Time constructor, catch all exceptions that arise and handle them by throwing an IllegalArgumentException with an informative message string.
Our solution produces eight different error messages for the given test cases. Cases for illegal times must be tested separately:
- too many leading zeroes in the hours or minutes (e.g. "00007")
- values for hours and minutes that are out of range.
You should add tests for these cases and throw IllegalArgumentException with informative message strings.
public class Time {

    private int myHours;
    private int myMinutes;

    public Time(String s) {
        int colonPos = s.indexOf(":");
        myHours = Integer.parseInt(s.substring(0, colonPos));
        myMinutes = Integer.parseInt(s.substring(colonPos + 1));
    }

    public Time(int hours, int minutes) {
        myHours = hours;
        myMinutes = minutes;
    }

    public boolean equals(Object obj) {
        Time t = (Time) obj;
        return myHours == t.myHours && myMinutes == t.myMinutes;
    }

    public String toString() {
        return myHours + ":" + myMinutes;
    }
}
C. Consistency Checkers
Setting Up Projects
The rest of the lab will involve four main Java files: Date.java, NumberGuesser.java, XsBeforeOs.java and InsertionSort.java. They are located in the lab07 folder.
If you haven't yet switched which partner is coding in this lab, do so now.
Test Driven Development
Each of the tests will check many different cases, especially edge cases. They are all already written for you. This style of workflow, where tests are written before the central code, is called test driven development (it has been mentioned in earlier labs). For the purposes of this lab, do not look at the test files before attempting to complete the method.
Viewing the test cases before writing your code may affect your thinking and cause you to overlook edge cases that the tests don't happen to cover. The tests are provided as a black box, intended to check your work without influencing your code.
If you need help, ask your partner, fellow students, or the staff. View the tests only after you have debugged and completed your own work. Viewing the tests after the fact can give you insight into how JUnit 4 tests are constructed. In particular, these tests will show you how you can test that methods throw the expected exceptions.
Detecting Program Errors
In previous labs, we focused on designing test cases to check results for our code. This is not always the best and most thorough way to look out for bugs, however. Even for simple classes, it is difficult to design a thorough test suite. Secondly, not all aspects of a class (e.g. private instance variables and methods) are accessible to a separate testing class.
In this section, we will introduce the concept of using self-monitoring methods to alert when bugs appear. We will tie the concept of invariants with exception throwing, catching, and handling to check an object's internal state for inconsistencies.
Example: Inconsistencies in Tic-Tac-Toe Boards
Consider a class that represents a board used to play the game of Tic-Tac-Toe, which is played on a three-by-three grid by two players, X and O. Players take turns putting their mark on an empty square of the grid, with X going first. The winner is the player that first puts marks in three horizontal cells, three vertical cells, or three diagonal cells in a row.
Not all possible arrangements of X's and O's in a 3x3 grid will occur in a legal Tic-Tac-Toe game. For example, a board containing nine X's will never appear. Inconsistencies in a Tic-Tac-Toe playing program would thus be represented by impossible boards. The consistency checker would check for these impossible boards, and signal an error if any of them arise.
Discussion: Internally Consistent BoardsLink to the discussion
What kinds of Tic-Tac-Toe boards are legal?
Consistency Checker for Tic-Tac-Toe Boards
In a Tic-Tac-Toe playing program, we might see the following pseudocode:
Process an X move. Check the board for consistency. Repeat the following: Process an O move. Check the board for consistency. If O just won, exit the loop. Process an X move. Check the board for consistency. If X just won, exit the loop.
The consistency checker is called immediately after executing an operation that changes the board. We won't be implementing a consistency checker for Tic-Tac-Toe, but we will be for other problems in this lab.
In subsequent discussion, we will use the name
isOK to refer to the
consistency checking method. We can combine this concept of consistency
checking with what we learned previously in this lab about exceptions.
If the state of the object being checked is internally consistent, the void
isOK method should not do anything. However, if the object being checked
is not internally consistent,
isOK should throw an exception that can
be handled by the method that called it or by our tests. Usually, when you
throw an exception, you should also print an informative error message that
can be used with debugging.
Example: Consistency Checker for Dates
Consider the representation of a date (year, month, day) as three integers:
- year: value between 1900 and 2100
- month: value between 1 and 12 (inclusive), with 1 representing January, 2 representing February, etc
- day: value between 1 and 28, between 1 and 29, between 1 and 30, or between 1 and 31, depending on the month and year
The three integers are the instance variables of a
Date class that we are
providing you. The methods include a constructor and an incomplete
isOK.
Open the
Date.java file and fill out the
isOK method that throws an
IllegalStateException (which is built into Java) if the instance variable's
values do not represent a legal date in a year between 1900 and 2100, inclusive.
Test your work by compiling and running the
DateTest.java file. It should
pass all the tests.
Example: Efficient (Though Buggy) Number Guessing
The
NumberGuesser class uses the binary search technique to guess the
user's number. Binary search starts with a range of values in sorted order—
here, the integers between 0 and 20 (inclusive). To find a particular value, we
first look at the middle value in the range. If the middle value is not what
we're looking for, we check whether what we want is higher or lower than the
middle value. If it's higher, we don't need to search the lower values; if
it's lower, we don't need to search the higher values.
This elimination of (roughly) half the candidates from contention with each iteration yields a much faster algorithm than searching all the candidates one by one with a linear search.
Here's a sample dialog with the program.
Please think of an integer between 0 and 20 (inclusive). Is your number 10? (Type y or n.) n Is 10 too high? (Type y or n.) y Is your number 5? (Type y or n.) n Is 5 too high? (Type y or n.) n Is your number 7? (Type y or n.) n Is 7 too high? (Type y or n.) n Is your number 8? (Type y or n.) y Got it!
The code uses variables
low and
high to keep track of the range of values
that are currently candidates for the secret number.
low starts out at 0 and
high at 20, reflecting the assumption that the user's number is somewhere
between 0 and 20 (inclusive). At each guess, the program shrinks the range of
values that can contain the user's value, either by lowering high (and
discarding large values) or by raising low (discarding small values).
The program includes a consistency checker
isOK method that makes sure that
0 <=
low <=
high <= 20, and each unsuccessful guess removes the guess from
consideration for the duration of the game.
The program has a bug, however. The number-guessing code and the
isOK
method are inconsistent. You'll identify the bug in the next step.
On a side note, notice that this particular
isOK method is not
void but
rather returns a boolean. This is also a possibility you could use when
working with your own consistency checkers.
#### Self-test: Identifying the Inconsistency
Some of the statements in the
NumberGuesser program are numbered (1, 2, etc.). One or more of the numbered statements is buggy. Luckily,
isOk will catch the error, and alert you when something goes wrong. Identify which statement is the problem.
Discussion: Fixing the BugLink to the discussion
First fix the statements you just identified. Now the
isOk
check should never fail (assuming the user does not lie to the program). Then briefly explain what you fixed and why it solves the problem.
D. Invariant Relationships
Invariant Relations
A more formal term for the relationships between variables that our
isOK
methods are verifying is invariants. "Invariant" means "not varying" or
"not changing". There are two kinds of invariant relationships:
- Class invariants relate values of instance variables to one another. These invariants are also known as representation invariants or data invariants. The "not varying" property is set up initially in the class constructors, and should also hold between calls to the other methods. Inside a method call, the invariant relationship may get temporarily invalidated, but it's restored by the time the method exits.
- Loop invariants relate values of variables in a loop or recursion. The "not varying" property is set up the first time at the start of a loop or at the initial call to a recursive method, and should hold at the end of each loop iteration. Within the loop, the invariant relationship may get temporarily invalidated, but it's restored by the end of the iteration.
The Tic-Tac-Toe board consistency checker contains a class invariant. The board class invariant relates the number of X's and O's in the board. After each move, the number of X's or O's will change, but the relationship between them will still hold (not more O's than X's, and no more than one X more than the number of O's).
The date example also contained a class invariant because the date invariant
relates the year, month, and date-in-month. Updating a date object with
something like a
setToTomorrow method (code below) may temporarily invalidate
the relationship, but the relationship will be restored prior to exiting
the method.
public void setToTomorrow ( ) { myDateInMonth++; // This may invalidate the invariant relationship // if tomorrow is the first day of the next month. if (myDateInMonth > monthLength (myMonth)) { myMonth++; if (myMonth == 13) { myMonth = 1; } myDateInMonth = 1; // restore the invariant relationship } }
The buggy number guesser contains an example of a loop invariant. The invariant related the value-to-be-guessed to the range of values that could contain it, and to the sequence of previous guesses. (The bug in the code resulted from incorrectly maintaining that relationship.)
Loop Invariants with Array Processing
Here is a common invariant pattern that shows up in array processing. The
processing involves a loop. In this case, the array is called
values:
for (int k = 0; k < values.length; k++) { Process element k of values. }
Often the processing of element k consists of including it somehow among elements 0, 1, ..., k–1. The loop invariant property says something about the first k elements or elements 0, 1, ..., k. Thus the invariant pattern is:. }
Loop Invariant Example: Moving X's to Precede O's
Suppose we have a character array that contains X's and O's, and we want to rearrange the contents of this array so that all the X's precede all the O's, as shown in the example below:
One way to do this is to loop through the array with an index variable named
k, maintaining the invariant that all the X's in the first k elements precede
all the O's. We must also keep track of the position of the last X among
those elements; we'll call that
lastXpos. Each loop iteration will start by
examining element
k. If it's an O, the invariant is easy to extend, as shown
below.
The more complicated case is when element
k is an X. To restore the
invariant, we exchange element
k with the position of the first O, as in
the following diagram.
Incidentally, a similar algorithm is used in the Quicksort sorting method that will be covered later in this course.
In the
XsBeforeOs class, fill out the
isOK method. Given a
char array
values and an index variable
k,
isOK should check that in the first
k
elements of
values all the Xs precede all the Os. If this consistency
check is not satisfied,
isOK should throw an
IllegalStateException,
which is built into Java.
After you have completed this method. Compile and run using the instructions provided earlier in the lab. Your code should pass two tests.
Now complete the
rearrange method based on the framework given in the
XsBeforeOs class. After completing this method, remove the two
@Ignore
annotations before the two rearrange tests in
XsBeforeOsTest.java. Compile
and run again. This time, your code should pass all four tests and should not
be printing any error messages.
Exercise: Insertion Sort
Here's another application of the same pattern to sorting the elements of an array. The algorithm is called insertion sort. We will revisit it later. Pseudocode appears below.
for (int k = 1; k < values.length; k++) { // Elements 0 through k-1 are in nondecreasing order: // values[0] <= values[1] <= ... <= values[k-1]. // Insert element k into its correct position, so that // values[0] <= values[1] <= ... <= values[k]. ... }
Here's how insertion sort is supposed to work. At the start of the kth time
through the loop, the leftmost
k elements of the values array are in order.
The loop body inserts
values[k] into its proper place among the first
k
elements (probably moving some of those elements up one position in the array),
resulting in the first
k+1 elements of the array being in order. The table
below shows a sample array being sorted.
Open
InsertionSort.java. While this class compiles as we've presented it to
you, it doesn't yet work correctly. The bodies of the methods
insert and
isOK are both missing.
Before filling out the
isOK and
insert methods, open
InsertionSortTest.java
and create three additional tests. Examples are given for you. After you have
finished writing your tests, continue to fill out the
isOK and
insert
methods.
Given an array of ints named
list and an index
k into the array,
isOK
should throw an IllegalStateException when elements
0 through
k of list
are not sorted in increasing order. It does nothing if they are sorted
correctly. In the case where
k is negative or
k exceeds the maximum list
index,
isOK should also throw an exception.
Also fill in the body for the
insert method, which takes the kth element
of a array and inserts it correctly into the array so that the first k + 1
elements are sorted correctly.
After you have filled out both methods, compile and run the tests. You should pass all six tests, three of which were provided by us and three of which were written by you and your partner.
E. Conclusion
Summary
The lab activities for today included two applications of exceptions. One is checking user input in multiple fields for correctness. Our solution involves cascading tests, first for
null input, then empty input, then incorrectly formatted input (too few or too many fields), then incorrect values within a field.
The other is checking internal state of an object for consistency. We saw several examples of "isOK" methods, which check that internal information for the objects is logically consistent. Some of these checks involve class invariant relations (e.g. in the
TicTacToe and
Date classes), while others involve loop invariant relations (the number-guessing code, the "moving X's to precede O's" code, and insertion sort). We observed a pattern for loop invariants among array values:. }
Readings
Read the following:
- HFJ chapter 10, pages 539-545, 548, and 568-575
Submission
Files to submit for this lab:
- TestExceptions.java
- Time.java
- Date.java
- NumberGuesser.java
- XsBeforeOs.java
- InsertionSort.java
- InsertionSortTest.java
Submit these files as
lab07.
In addition, please fill out this self-reflection form before this lab is due, as a part of your lab assignment. Self-reflection forms are to be completed individually, not in partnership. | http://inst.eecs.berkeley.edu/~cs61bl/su15/materials/lab/lab07/lab07.html | CC-MAIN-2018-05 | refinedweb | 4,081 | 63.9 |
You can customize Team Foundation Build by creating your own custom tasks that run during a build. This topic explains the steps that you must follow to customize a Team Foundation Build build definition with a task that generates build numbers.
Prerequisites
Before you create the task to customize build numbers, you must have the following:
Access to the TFSBuild.proj file of the build definition you want to customize.
The TFSBuild.proj file can be associated with more than one build definition.. By default, the TFSBuild.proj file is located in the folder $/MyTeamProject/TeamBuildTypes/MyBuildName in Team Foundation version control. MyTeamProject is the name of your team project and is the root node of all your team project sources. MyBuildName is the name that you gave to the first build definition that is associated with the TFSBuild.proj file. For more information about how to create Team Foundation Build build types, see How to: Create a Build Definition.
When you customize the TFSBuild.proj file, you customize each build definition associated with it.
A local workspace that contains your team project files and the build files on your local computer.
For more information, see How to: Create a Mapped Workspace and How to: Get the Source for Your Team Project.
Required Permissions.
To perform this task, you must have the Administer a build and Administer workspaces permission set to Allow. You must also have the Check in and Check out permissions set to Allow. For more information, see Team Foundation Server Permissions.
To write your task, you can either implement the ITask interface directly, or derive your class from a helper class Task. ITask is defined in the Microsoft.Build.Framework.dll assembly and Task is defined in the Microsoft.Build.Utilitites.dll assembly.
To customize the build number that is generated by Team Foundation Build, you must insert your task into the BuildNumberOverrideTarget target. BuildNumberOverrideTarget requires an output property called BuildNumber. The Output attribute indicates that the property is the output of your custom task. For more information about Team Foundation Build targets, see Customizable Team Foundation Build Targets.
Create a Visual C# class library called MyTask that contains your custom task.
For more information, see Component Classes.
On the Project menu, click Add Reference, and select Microsoft.Build.Framework and Microsoft.Build.Utilities from the Add Reference dialog box.
Insert the following code to the class.cs file.
This example inherits from the Task helper class and uses the DateTime properties UtcNow and Ticks to generate the build number.
using System;
using Microsoft.Build.Utilities;
using Microsoft.Build.Framework;
namespace BuildNumberGenerator
{
public class BuildNumberGenerator:Task
{
public override bool Execute()
{
m_buildNumber = DateTime.UtcNow.Ticks.ToString();
return true;
}
private string m_buildNumber;
[Output]
public string BuildNumber
{
get { return m_buildNumber; }
}
}
}
Build your class library to produce MyTask.dll.
Copy the built DLL to the local workspace folder that also contains the TFSBuild.proj file of your build definition.
You must have mapped the source control location of the TFSBuild.proj file to your local workspace before this directory structure exists on the client computer. For more information, see
How to: Get the Source for Your Team Project.
If your TFSBuild.proj file was stored in the default folder in source control, the local copy of the file is located in <root>:\Local Workspace\TeamBuildTypes\MyBuildName on the client computer. Local Workspace is the local folder to which your team project is mapped, MyTeamProject is the name of your team project and MyBuildName is the name that you gave to the first build definition that is associated with this TFSBuild.proj file.
After you have created the DLL that contains your custom task, you must add it to Team Foundation version control. You can use the tf add and tf checkin commands to add and check in the DLL to the same location as the TFSBuild.proj file of your build definition. For more information, see Add Command and Checkin Command.
Click Start, point to All Programs, Microsoft Visual Studio 9.0, Visual Studio Tools, and then click Visual Studio 2008 Command Prompt. Open the local workspace you have mapped for the team project that contains the build type you want to customize.
For example, type the following at the command prompt.
> cd c:\MyTeamProject
Where MyTeamProject is the name of your team project.
Move to the location where the TFSBuild.proj file is stored.
c:\MyTeamProject>cd TeamBuildTypes\MyBuildName
Where MyBuildName is the name of the build definition.
To add the file to Team Foundation version control, type the following command.
c:\MyTeamProject\TeamBuildTypes\MyBuildName> tf add MyTask.dll
To check in your file to Team Foundation version control, type the following command.
c:\MyTeamProject\TeamBuildTypes\MyBuildName> tf checkin MyTask.dll
You can also use the Team Explorer to add your DLL to Team Foundation version control. For more information, see How to: Add a Project or Solution to Version Control.
After you have created your task, you must register it by specifying your task in a UsingTask element in the TFSBuild.proj file. The UsingTask element maps the task to the assembly that contains the task's implementation. For more information, see UsingTask Element (MSBuild).
Start Visual Studio.
Check out the TFSBuild.proj file that you want to modify from Team Foundation version control and open it in the Visual Studio XML-editor.
Add the UsingTask element to the TFSBuild.proj file immediately after the import statement.
<UsingTask
TaskName="BuildNumberGenerator.BuildNumberGenerator"
AssemblyFile="MyTask.dll"/>
To insert your task into the BuildNumberOverrideTarget target, add the following XML, enlosed within the <Target></Target> tags, to the end of the TFSBuild.proj file.
</ItemGroup>
<Target Name = "BuildNumberOverrideTarget" >
<BuildNumberGenerator>
<Output TaskParameter="BuildNumber" PropertyName="BuildNumber"/>
</BuildNumberGenerator>
</Target>
</Project>
Click File, click Save to save your changes, and then close TFSBuild.proj.
You will receive XML-schema warnings after you make these changes to the TFSBuild.proj file. You can safely ignore those warnings.
Check TFSBuild.proj back in to source control.
After you have modified the TFSBuild.proj file and saved the changes in Team Foundation version control, run the build definition.
For more information, see How to: Queue or Start a Build Definition.
You can view the custom build number in the Build Explorer. For more information, see How to: Monitor Build Progress.
<
<
<
<
<ItemGroup>
<
</ItemGroup>
<
<
<
</Target>
<
<
<
</GenerateBuildNumber>
</Target>
<Our server has an environment variable TFSTEMP defined which points to the location where files are retrieved during the build.Store your dll in source control at TeamBuildTypes\Bin\My.Tasks.Dll | http://msdn.microsoft.com/en-us/library/aa395241.aspx | crawl-002 | refinedweb | 1,087 | 50.33 |
In PLDI this year: Ben Titzer, "Harmonizing Classes, Functions, Tuples, and Type Parameters in Virgil III" [pdf].
I don't agree with the part on variant types (3.5, Emulating Variant Types, page 6). More precisely, I don't agree that what is presented deserves the name "variant". It shows an example of using dynamic type checking for control flow, but in my book, when you have none of readability, type safety, and exhaustivity checking, you don't have proper variants.
Other than that, I have an "ideological" disagreement with the idea of including runtime type dispatch as a pillar of your programming language, so I'm quite sure that I would not like Virgil as a language, but I am sympathetic to the methodology of writing about "small" languages and discussing their upsides and downsides in an honest way.
I think not having sum types in your new language is a big design mistake, and it seems that the subtyping relation of the language is flawed (if I skimmed the paper correctly, all class parameters are invariant). Those would be my two pet peeves with the current realization.
FWIW I added a bonafide implementation of variant types in version III-2.0586 (i.e. about a year ago), complete with pattern matching and exhaustivity checking for match statements. Variants are implemented underneath the hood as objects allocated on the heap, but they don't have semantically visible identity; equality is checked with deep comparison. This makes an optimized implementation of variants with bit-packed value representations completely transparent.
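To make the identity point concrete, here is a hedged Java sketch (Java rather than Virgil, and all names invented for illustration): if equality and hashing are defined as deep structural comparison, the program cannot observe which heap object it is holding, which is what leaves the representation free to be optimized.

```java
// Hypothetical sketch (invented names): a variant-like value whose identity
// is not observable. equals/hashCode are deep structural comparisons, so two
// separately allocated values with the same shape are indistinguishable,
// leaving an implementation free to bit-pack or share representations.
final class Variant {
    final int tag;          // which variant arm this value belongs to
    final Variant child;    // payload; null for a leaf

    Variant(int tag, Variant child) { this.tag = tag; this.child = child; }

    @Override public boolean equals(Object o) {
        if (!(o instanceof Variant)) return false;
        Variant v = (Variant) o;
        return tag == v.tag
            && (child == null ? v.child == null : child.equals(v.child));
    }

    @Override public int hashCode() {
        return 31 * tag + (child == null ? 0 : child.hashCode());
    }
}
```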
You have some valid points. I agree that variant types are an important language feature, and am certainly not arguing against including them in a language. In fact, variants and pattern matching are the next major feature I hope to add. They will simplify several messy patterns at the cost of language (and therefore implementation) complexity. But that wasn't really the point of the paper. We can always add more features to a language. But in what order? What should we try to get right first? I discovered that once I got these classes, functions, tuples, and type parameters to work well together, then I could actually use them in interesting patterns to emulate other language features. The emulations give you some of the bang of the feature, without paying in language complexity. When bootstrapping a language, this allows the designer time to separate concerns, get features right independently, and make more rapid progress. Of course those emulations have various tradeoffs--in Virgil, one of those tradeoffs is sometimes using dynamic type tests.
As to your specific point about variant types, the pattern as presented in the paper is both similar and different from variant types. On one hand, you don't have exhaustivity checking, and pattern matching is a little noisier than with first-class language support. On the other hand, it admits new variants at creation sites (i.e. the set of variants is not bounded); and of course, one needn't necessarily use pattern matching at all, but can instead add more functions to the variant that essentially do dispatch themselves. This is, for example, what I do in my compiler backend. Maybe the name is not quite right, but I am not aware of how one would do a similar kind of pattern in other languages.
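As a rough illustration of the pattern being discussed (a Java sketch with invented names, not actual Virgil code): a small class hierarchy plays the role of the variant type, and dynamic type tests play the role of pattern matching. The set of variants stays open, but nothing warns you about an unhandled case until runtime.

```java
// Hypothetical sketch (invented names): emulating a variant type with a class
// hierarchy. Dispatch is by dynamic type test; the set of variants is open
// (anyone can add a subclass), but the compiler performs no exhaustivity
// check, so a forgotten case is only caught at runtime.
class Expr {
    static class Lit extends Expr { final int value; Lit(int v) { value = v; } }
    static class Add extends Expr {
        final Expr left, right;
        Add(Expr l, Expr r) { left = l; right = r; }
    }

    // "Pattern matching" via instanceof: if someone adds a new subclass of
    // Expr, this method silently falls through to the error case below.
    static int eval(Expr e) {
        if (e instanceof Lit) return ((Lit) e).value;
        if (e instanceof Add) { Add a = (Add) e; return eval(a.left) + eval(a.right); }
        throw new IllegalStateException("unhandled variant: " + e.getClass());
    }

    public static void main(String[] args) {
        Expr e = new Add(new Lit(2), new Add(new Lit(3), new Lit(4)));
        System.out.println(eval(e)); // prints 9
    }
}
```

The alternative mentioned above, adding more functions to the variant itself, corresponds to replacing `eval` with a virtual method on `Expr`, which moves the dispatch into the objects.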
As to variance for class type parameters, I did touch on the subject briefly in the section titled "Type Variance". Suffice it to say that invariance was not an accident, and that variance for functions often makes up for the deficit.
[edit: this was supposed to be in reply to gasche]
One more polite way to formulate my disagreement, that I should have thought about first, is the following. about the Oz language). Interface Adapters are also quite believable. Ad-Hoc polymorphism is a bit cringey, but ad-hoc polymorphism generally is anyway. The variant types are ill-served by this contrast: I don't think they work, while the others *do*, so by this point of your article I was used to "surprisingly good" emulations.
In my experience, a good exhaustivity check is "orders of magnitude" more helpful for programmer productivity (both in writing bug-free code *and* in refactoring code afterwards) than any kind of extensibility of variants that will be used in much rarer cases -- it's still useful, but not in the same league.
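For contrast, one classic way to recover a compile-time exhaustivity guarantee in a class-based language without built-in variants is the visitor pattern (a hedged Java sketch with invented names, not something from the paper): adding a new case means adding a method to the visitor interface, which breaks every existing visitor until it handles the new case.

```java
// Hypothetical sketch (invented names): the visitor pattern recovers a form
// of compile-time exhaustivity in a class-based language. Adding a new case
// means adding a method to Visitor, which breaks every existing visitor at
// compile time until it handles the new case.
interface Shape { <R> R accept(Visitor<R> v); }

interface Visitor<R> {
    R circle(double radius);
    R square(double side);
    // Adding `R triangle(...)` here would force every implementation to update.
}

class Shapes {
    static Shape circle(double r) {
        return new Shape() {
            public <R> R accept(Visitor<R> v) { return v.circle(r); }
        };
    }
    static Shape square(double s) {
        return new Shape() {
            public <R> R accept(Visitor<R> v) { return v.square(s); }
        };
    }

    static double area(Shape s) {
        return s.accept(new Visitor<Double>() {   // must handle every case
            public Double circle(double r) { return Math.PI * r * r; }
            public Double square(double side) { return side * side; }
        });
    }
}
```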
It seems another language is coming out from the Google factory, right?
My understanding (but Ben can correct me) is that Virgil is not a "Google language", it's just a language that happens to be created by someone who works for Google.
Factor and Magpie (mine) are other languages that are created by Googlers but aren't from Google itself. Google just happens to be quite friendly towards extra-curricular projects.
Given that Google has many PL PhDs and other enthusiasts, not all of them are working on language design as their full time job. Affiliation listed on a paper then becomes a bit weird, but not by much.
Reality is, there are few--probably zero--places in the world that are going to pay you to design your own language for your own purpose. So one has to make some compromises; maybe work on another language project that is close to your ideas, maybe contribute some language ideas to an existing language, maybe work on a different system in a different area of CS altogether. I think doing some of each of these is instructive. One needs some exposure to real-world problems and to see how languages fail. If you can palate the ickiness of writing, maintaining, debugging, or extending a mountain of Java or C++ code, there are a lot more job prospects, and some of that experience may transfer into the language.
I chose to work on a non-PL project starting off at Google and spent 3 years there. It gave me a perspective on debugging and logging in the large that I don't think I would have gotten elsewhere. Now I'm on the V8 team, and I'm happy to work on compilers as my FTJ, even if the implementation language (C++) and the target language (Javascript) are both losers. There are other language projects around at Google, but I'm happy to keep chugging along in this PL direction.
Reality is, there are few--probably zero--places in the world that are going to pay you to design your own language for your own purpose.
Well I got lucky, and it helps to be in research (vs. the Google PhD super-dev). Unfortunately, such jobs are just becoming more rare.
Reality is, there are few--probably zero--places in the world that are going to pay you to design your own language for your own purpose [all of the time].
The more classic compromise is life as an academic: teaching and managing research in trade for time to pursue personal projects. But the ratio of graduating PhDs to available positions seems to tend towards zero over time. Oddly enough the trade-offs inside the ivory tower are remarkably similar: contributing techniques to projects in related areas / seeing completely different areas of CS. They also bring the same benefits in terms of cross pollination.
It gave me a perspective on debugging and logging in the large that I don't think I would have gotten elsewhere.
You are in a place that seems uniquely suited to that experience (thinking back to Matt Welsh's blog posts). Speaking of which, your paper looks interesting so it's landed on the "to read" pile. Hopefully I will have some more on-topic comments later. After a quick look though I do already have a naive question to ask: about 70% of the material covers a description of novel aspects of the language, less than 30% evaluates the impact of those aspects. The evaluation is in the form of analytic discussion rather than empirical results.
Although that split seems pretty standard for a paper introducing a novel language, it leaves me curious: how much is that split a result of the way that you wanted to write the paper, and how much is an artifact of writing it in a way that will get it published? In particular, what kinds of useful knowledge arise from looking at large-scale uses of an existing language that are difficult to communicate / shoe-horn into the standard template for a research paper?
Empirical evaluation would be nice, but unfortunately, nobody knows how to empirically evaluate programming languages in a way that (1) is not limited to trivialities (like surface syntax), (2) properly eliminates the gazillion irrelevant variables (like bias from predominant practice), and (3) is practical.
Are you sure that is actually a conjunction and not a disjunction?
Empirical evaluation does seem to be difficult, and the relative lack of results that I've seen is the main motivation for asking. The difficulty seems to be related to the type of question / property that is being evaluated. As in any other field, trying to evaluate a poorly chosen property will lead to weak results. Some poor examples that have been tried include:
* does language X make a programmer more productive (specifying which programmer is one of the gazillion variables)?
* does language X make code better (specifying better hits both problems (2) and (3) at once)?
Some good examples (off the top of my head as skimming the last couple of years of PLDI will take too long):
* does language X permit efficient compilation?
* does language X lead to a tractable analysis of Y?
* how compact is language X, i.e. given common idiom Y how many ways can it be expressed (the Python criteria)?
* can we reduce the language size by simulating feature X with feature Y (attempting to veer ever so slightly back on topic)?
In general "weaker" results are easier than "stronger" results, i.e. does there exist a program in language X with property Y rather than does every program in language X meet property Y. But those weaker results can still be evaluated empirically, I believe that your problems apply more to stronger results.
I'm still curious though - let's say that you were in a position where you had a large corpus of test programs, like .... all the javascript that V8 runs on. What kind of empirical experiments are enabled by a corpus of that size? Can you treat it as a representative sample of programs (yes, stepping over the statistical can of worms for that term) and do things that are not possible just using a handful of handcrafted examples?
* does language X permit efficient compilation?
That's more an evaluation of compiler technology than language design.
* does language X lead to a tractable analysis of Y?
Not sure what you mean by that, or how to evaluate it empirically.
* how compact is language X, i.e. given common idiom Y how many ways can it be expressed (the Python criteria)?
* can we reduce the language size by simulating feature X with feature Y (attempting to veer ever so slightly back on topic)?
These are both questions about orthogonality. It's a good criterium, but one that you have to evaluate through analysis, not empirically.
There may be interesting phenomena to observe that way (although I'm skeptical that it can give you more than vague data points). But such an analysis is impossible for a new language. And certainly, anything along these lines is far outside the scope of a paper about such a topic anyway, wouldn't you agree?
The empirical equivalent of those analytical questions is, "how easily does a person learn how to do X?" ie.
* does language X permit efficient compilation?
becomes:
* how easily does a new developer learn how to express efficient programs in language X?
It's not clear that analytical simplicity always translates into end-user understandability.
* does language X lead to a tractable analysis of Y?
Not sure what you mean by that, or how to evaluate it empirically.
Sadly I didn't see that the word property was missing on a proofread. What I had in mind was static analysis, but I see that Ray has covered that issue far more comprehensively below when discussing safety.
It's a good criterium, but one that you have to evaluate through analysis, not empirically.
Not necessarily... an explanation requires some fluffy handwaving here to construct an argument. By idiom I mean a common technique for implementing something (or other). Analysis of how many ways that can be expressed in a Turing Complete language will be infeasible. But there is a coarse approximation that would be applying a brute-force enumeration counting the ways using n-symbols. Of course this is still not accurate as deciding if the string of n-symbols implements the given idiom would still be intractable, but assume some approximation is chosen and the result evaluated.
Would that be an analytic result, or an empirical experiment? My interest is essentially in where we draw the line between empiricism and analysis in a field that consists mainly in symbol manipulation. But, apologies to Ben as I was initially curious about this issue in the context of evaluating Virgil, but I've clearly wandered quite far on a tangent so I think I will stop here.
How should we evaluate programming languages and research on programming languages?
...and I don't have a good answer either. :(
Empirical evaluations have value, but are not in themselves the kind of evaluations that we as human beings are likely to be satisfied with.
An empirical evaluation would ask the same series of specific questions about every programming language, and restrict itself to questions that can be unambiguously answered. That would mean things that can be known or counted or measured.
For example, if we wanted to evaluate "expressiveness" we would be asking a long series of specific questions about whether particular constructions from other languages can be expressed with strictly local transformations (where 'strictly local' has a specific meaning such as 'with an isomorphic Abstract Syntax Tree') and if not, whether it is possible to define routines/transformers elsewhere that make such local transformations possible.
Similarly, we could evaluate "safety" with a long series of specific questions about what if anything we are able to prove in advance of running the program or in advance of running particular parts of the program, whether runtime data can undefine/redefine runtime semantics such as procedure calls/returns (cf. stack stomping and execution of stack-allocated or malloc'd buffers), whether 'junk' data is ever visible in the values of uninitialized variables, whether writes to arrays are bounds checked, etc.
But that series of answers, however valuable it may be, is never going to satisfy people. Certain kinds of 'expressiveness' actively limit 'safety' for example, and people will prefer one over the other to different degrees. If you assign an aggregate score for either quality or both based on some scoring algorithm, the scoring algorithm itself is necessarily something that people can disagree about. IE, given the algorithm and the answers to the empirical questions, you can empirically determine the score, but there is no set of facts from which you can empirically determine a 'best' scoring algorithm.
seems like yes i'd like to be able to have the raw #s for languages, and then people can just say, personally i like scoring more for expressiveness than safety, and i can say i'm the other way 'round a little bit.
at the moment, most people talk about programming languages w/out talking about these concepts for the most part; most programmers don't have the words or training or learning or experience to talk about things in such ways, leading to lame flame wars, and excessive use of python (shudder).
Sure, we could ask a series of questions about what properties the language has--how easy it is to implement that is or that anticipated (i.e. step 0) feature or pattern or construct. That is only an interesting question up to a point. At some point after that, the inductive features of the language must take over. We cannot possibly imagine all possible programs or all possible patterns, therefore we must be able to design languages that have positive combinatoric properties. Languages must have building blocks that can be fit together in new and interesting ways. Rough edges make combining constructs harder.
That is one of the ideas I am trying to explore here.
Unfortunately, even if we were able to talk about empirically evaluating the step 0 constructs, an empirical evaluation of the combinatoric properties of a language seems even harder! After all, there is no way to say what combinations are the most useful or the most likely to be used ahead of time. We can only talk about and show examples of how a language is modular and constructive in this fashion.
This is what I tried to do in this paper--as honestly and as straightforwardly as possible. There are tradeoffs when going for combinatorics versus breadth in features.
To the point of evaluation. Originally I tried to publish a version of this paper that contained benchmark results showing that Virgil is competitive in performance (even superior in some cases) with other languages like Java and C#, as well as benchmarks that showed that the normalization (flattening) of tuples makes a huge difference in performance, an implementation choice that, while technically orthogonal from the design aspect of integrating tuples and functions, is of enormous practical importance, since I believe that slow language constructs breed some distrust in programmers and lead them to handicap themselves when approaching design problems. The feedback I got from program committees is that an experimental results section is a wondrously entertaining and contentious bikeshedding experiment and a complete distraction from the design contributions of a paper. It is an easy way to earn a rejection from a disgruntled PC member in the name of "high standards". Experimental results sections rarely please all reviewers. In response I ripped it out; in a strange way it led to the purer, cleaner, benchmark-free version of the paper given here. Not that I wouldn't enjoy writing up some detailed performance comparisons of Virgil versus other languages. But that is another day....
ppig is concerned with such questions.
Virgil is what C# should have been. The core choices are exactly what I'd expect of an OO-FP hybrid with parametric polymorphism, and I think the choices almost universally make perfect sense. C# has many of the same mechanisms, ie. value types, delegates, generics, but their integration is unnecessarily cumbersome and inexpressive by comparison.
The only choices I'm not convinced of are the use of classes which necessitates partitioning the function and method space, although you do unify them nicely again. C#'s extension methods were a nice addition because they provided a means to implement method chaining as ad-hoc "methods", but it glaringly points out the flaws of separating methods and functions. I fear something similar will befall Virgil. For instance, something like LINQ in Virgil seems like it might be somewhat cumbersome to define and use.
The only other choice I dislike in OO languages are the separate and unnecessarily verbose type-test and type-cast operators, which require the programmer to manage his own scoping. I would much prefer a single operator that performed a type-test-and-cast while also introducing a scoped variable to which the cast value is bound if successful (like deconstruction via pattern matching). This pattern is almost always what you end up doing with type tests and casts, so it should really be concise. Perhaps syntactic sugar like:
when (var i = int.!(a)) printInt(fmt, i);
I just noticed that while Virgil supports first-class functions, I don't see any mention of anonymous functions. Ah, I see from the wiki that this feature is schedule for a future version.
so F# isn't what C# shoulda been? :-(
No, too much syntactic departure from the C tradition. C# was supposed to be a better Java. It is in many respects, but Virgil is better than both.
I always thought Nemerle was what C# should have been.
Not too mention the quality of engineering of the Nemerle compiler meets or exceeds that of any other Microsoft compiler; they found tons of bugs in MSIL and .NET's CLR generics implementation. This is likely due to the fact that Nemerle is actually a very small core, and much of the language functionality is actually done in macros as part of the standard library. For example, C# added async/await as language features, whereas Nemerle simply wrote some macros.
That might have been true of early Nemerle when they still primarily used the C syntax and before they introduced the powerful macro facilities, but it's grown into too radical a change from the existing C/Java tradition. Been awhile since I looked at it too closely though. Perhaps it's time for another look!
People are way too enamored with LINQ, but people have to do all kinds of stuff to actually get any "call by intention" to work correctly. For example, Linqkit and calling AsExpandable() and Invoke() and Expand(). A proper DSL crafted solely for the context of SQL Generation would hide these abstractions for the user. C# designers did not think of that, and as a result we have an ugly DSL. Anybody who tells you LINQ is nice is an idiot who is wow'ed by a simple demo.
Nemerle is better than Virgil, although F# with full code quotations is much nicer even yet, because it hides all the plumbing Nemerle exposes as macros.
Also, not sure when they introduced macros, but Nemerle had macros and fairly expressive generics in 2008.
To your point about casting values, I am surprised you did not mention infoof, the property equivalent to typeof. In either event, infoof is trivial in Nemerle, it's simply a macro.
The other, broader issue, is that MSIL was never designed for these sorts of embedded languages, that's why we even have expression trees in the first place, because there is no duality between the language we program and and the byte code we execute, so we need to manually quote our code. F# manages to hide a remarkable amount of this mess, but they do so by creating a peer API Expr and map Expr to Expression Trees, so that they can get the advantage of expressing ASTs as discriminated unions. But, of course, F# could be better here, since it doesn't have an elegant solution for the expression problem.
At the same time, MSIL gets further hacked up. Rather than genuinely supporting infoof, we get hacks like CallerMemberName. Who thinks up dumb stuff like that? Same people who thought the DLR was elegant, I assume. Meanwhile, .NET framework guys do everything they can to avoid putting in runtime features to improve the experience for F# users, such as CallerMemberName rather than improving the allocator for functional languages.
People are way too enamored with LINQ [...] C# designers did not think of that, and as a result we have an ugly DSL.
It depends what you mean by LINQ. It's a fairly overloaded term now, but the IEnumerable LINQ extensions are quite good. The integration of quasi-quotation in the form of LINQ expressions are also nice. The query syntax was unnecessary, although I'll admit it's been convenient to have in a few cases. Complex grouping queries are much simpler to read in the flat query syntax than in the method-chained syntax.
Certainly C# has annoying corner cases even in LINQ, but I don't think you can deny that it's miles better than what's available in other major C tradition languages.
As for how Nemerle compares against Virgil, certainly Nemerle is more featureful, but it's not easy to see what the core Nemerle language really is due to the macro magic. Virgil's language is relatively simple and expressive given the description in this paper. In perusing Nemerle's standard library, it doesn't even look like polymorphic Nemerle classes compile to generic CLR classes, ie. all collections under Nemerle.Collections accept and return System.Object. That's certainly not what I'd expect a serious CLR language to generate.
I think LINQ is a cool idea, but it always bothered me that to make the implementation sufficiently extensible and flexible enough to support SQL backends, expression trees were necessary. This adds a whole meta-language to the language that wasn't there before. Perhaps C# would have been cleaner if it did have a meta-language from the beginning, but bolting one on the side always seems like a bit of a hack. From what I understand of the common backends for LINQ, I don't think they they make that much use of the expression trees in any case, so it seems like a lot of complexity pushing in the wrong direction.
Relational and mainstream languages are headed for a collision in the coming years. Too much real-world work deals with reading, writing, querying, copying, and otherwise tending to persistent data stores. If the right concurrency models can be made to fit, and the right consistency dimensions carefully added to the language, it will make a huge difference in productivity for the average programmer stuck with some SQL or NoSQL datastore that they are currently clumsily hacking on in the language du jour. This is coming from my experience working with object/relational mappings in Java with both a SQL db and a No-SQLish db.
I think LINQ is a cool idea, but it always bothered me that to make the implementation sufficiently extensible and flexible enough to support SQL backends, expression trees were necessary.
I agree. They should have just abstracted values via the finally tagless pattern, then provided an interpreter for so-called "LINQ to objects/XML", and a translator for "LINQ to SQL". Unfortunately, due to the absence of higher kinds, this pattern is clumsy at best in C#.
Relational and mainstream languages are headed for a collision in the coming years. Too much real-world work deals with reading, writing, querying, copying, and otherwise tending to persistent data stores.
I'm struggling with this now in fact. You really want a uniform API for orthogonal persistence that supports direct object access and iteration, but none of the options seem viable.
Querying, filtering, ordering is nice in LINQ to objects, but it lacks an API for updating and deletion. You could just use raw collections, but this doesn't really allow for efficient lazy loading from the persistent store.
Supporting an expression tree API like LINQ to SQL is possible, but far less efficient when the objects happen to be in memory since you have to interpret or compile the expression..
The paper you link to was published when LINQ already had community tech previews.
My rant is more about false promises, rather than what they should have done. Don't promise what you can't deliver. Consider that LINQ was meant to unify many stories, such as those that were covered by DataTable. This objective clearly failed, as there is no way to directly represent PIVOT and UNPIVOT in LINQ, you have to resort to a GroupBy encoding and you have to know the targets (for PIVOT and/or UNPIVOT) ahead of time. (Granted, with enough reflection, you could do this, but it completely defeats the objectives of writing LINQ rather than generating it.) DataTable did not have this limitation.
tl;dr: Embedded reporting sucks, and computer scientists don't know what to do about it..
I have no idea why these implementation details matter. A data structure that exposes a LINQ API? I tried reading this several times, and am sure I have no idea what you are suggesting. Please break it down for me.
I mean simply what Ben said, re: programs need to query, filter and update data, and LINQ is a decent API for doing so. However, no existing data structure available for .NET supports the necessary semantics for all such programs, ie. an object may contain an enumerable set of some sort, but this set can't scale to arbitrary sizes without exhausting main memory, or be queried efficiently at such sizes using the IEnumerable LINQ API.
To resolve this, developers must connect to a specialized storage manager which introduces an explicit partition, like LINQ to SQL did with EntitySet and EntityRef, thus complicating reasoning about such programs.
Instead, why not take the object database concept to its logical extreme and simply make such sets a cache-oblivious search tree of some sort, thus supporting arbitrary sizes, a structure that supports constructing relatively efficient queries, and asymptotically optimal block transfers for data being actively accessed, all without leaving a language's object model. If you're familiar with Waterken, basically the approach taken in it's object store, but add efficient querying via a LINQ API.
I agree that big data makes reasoning about bulk updates difficult. I am doubtful "simply make such sets a cache-oblivious search tree of some sort" is the answer for all applications.
I am not familiar with Waterken in detail, only at a superficially high level. Perhaps my only excuse for doubting you is ignorance. I will check it out, thanks.
Lambdas soon, hopefully! Halfway there, and not mentioned in the paper, Virgil also has partial application with a syntax similar to Scala:
var f = o.m(x, _, z);
which is covered in the tutorial. I also have plans to beef up pattern matching both for type queries and proper sum types. I think these will make some pretty nice upgrades to the core without upsetting the balance of other features.
Virgil also has partial application with a syntax similar to Scala
I'm not sure I like this approach. I find reading Scala difficult because of this sometimes. I generally think lambda syntax should be so concise as to not need such a shorthand.
How is this resolved:
var f = o.m(x, _, z)(_, 3);
Is f a single-arg function itself performing a partial application that returns a single-arg function, or is f a two-arg function?
If we assume
class C {
def m(x: int, y: int, z: int) -> int;
}
var o = C.new();
Then:
var f = o.m(x, _, z)(_, 3);
t1 = o.m // a closure of type (int, int, int) -> int, with o bound as the receiver
t2 = t1(x, _, z) // a closure of type int -> int, with x and z bound as paramters 1 and 3
t2(_, 3) // type error because t2 is of type int -> int
I think the nice way to make this feature work well is to have an explicit marker of where the lambda-abstraction floats to. When I implemented it as a syntax extension for OCaml, I used \( ... ) : parentheses with a backslash to indicate the abstraction site. There is then no question of what m.f(x, _) means as opposed to list.map(x + _): you use either \m.f(x, _) or list.map \(x + _).
In Virgil it is always clear where the abstraction occurs, because _ can only be used in place of an argument to an actual application, never to an infix operator or other expression. To do list.map(x + _) you would write list.map(int.+(x, _)). Anything more complicated would need a lambda.
_
list.map(x + _)
list.map(int.+(x, _))
t2(_, 3) // type error because t2 is of type int -> int
I should have been more clear. I hoped my description would convey that o.m returns a two-parameter first-class function, so you have two partial applications, and I wanted to know whether the expression I provided abstracts both calls at once, thus yielding a two-parameter tupled lambda, or only one at a time, thus yielding a two-parameter curried lambda. The former seems more in line with Virgil's philosophy, but I suspect the latter. This ambiguity is why I don't like _ on its own.
gasche described one possible way to disambiguate, although I still prefer being even slightly more explicit with lambdas. C#'s lambda syntax is sufficiently concise that I very rarely curse it. When I do, it's either some limitation with type inference, or I'm simply cursing all the N-ary overloads I have to generate, a problem Virgil thankfully avoids.
Ah. I see your question now, and I hope my example makes the rules clear. I didn't design for the case you describe because I think that reasoning about expressions should be local, regardless of partial applications. Thus we can have a simple rule:
For any application of m: (A, B, C) -> D
m: (A, B, C) -> D
m(x, y, z)
A partial application:
m(x, _, z)
has type B -> D
B -> D
We can then proceded from the left to right evaluation rules to understand more complex expressions. Also note that more than one _ is allowed in an application.
For any use case more complicated than that, I expect that lambda should be used instead.
[dup]
The code example that matches argument against several types using type queries does not look nice:
def print1(fmt: string, a: T) {
if (int.?(a)) printInt(fmt, int.!(a));
if (bool.?(a)) printBool(fmt, bool.!(a));
if (string.?(a)) printString(fmt, string.!(a));
if (byte.?(a)) printByte(fmt, byte.!(a));
}
This is very similar to a dreadful pattern one often see in Java code with repetitive instanceof checks and casts. Virgil's shorter syntax helps, but not much.
A type switch or selector could make it more readable like with a hypothetical:
def print1(fmt: string, a: T) {
type? a {
int => printInt(fmt, a);
bool => printBool(fmt, a);
string => printString(fmt, a);
byte => printByte(fmt, a);
}
}
where the variable a gains the corresponding type within the scope of the type match. If there is no match, a syntax or runtime error is a result.
a
Then the original type selector and cast operators T.?(a), T.!(a) become merely a syntax shugar tor type? a { T => true; _ => false; } and type? a { T => a; }
T.?(a), T.!(a)
type? a { T => true; _ => false; }
type? a { T => a; }
I use a similar type dispatch for printing. Each type is a subclass of 'langObject' which has a function named stringrep that returns the string representation of the object. Each class provides its own implementation of stringrep, which print calls directly whenever it is called in a context where there is certain knowledge of the subtype. Otherwise the stringrep function of the parent class is called. It does dynamic type dispatch, which involves querying the object, and calling the subclass's stringval function (downcast to a subtype pointer) from a switch statement.
This means the ugliness of type dispatch is confined to the base class's implementation, allowing me to avoid it whenever type analysis succeeds.
Further, although this isn't portable and doesn't really do anything to fix semantic ugliness, I'm working with a compiler that, if you jump through a hoop or two, guarantees O(1) switch statements. If a switch statement accounts for all cases within some range of a scalar type, and the cases (typetags) are arranged in a monotonic order, it will implement the switch as a jump table if that's faster. The code is portable because it uses only standard constructs, and the optimization is guaranteed only by the implementation, not by the language standard. Nevertheless, as long as I pay attention to this 'extra-semantic' requirement and use the same dev environment, the type dispatch time on native types is strictly limited to a constant or less, which makes me feel better about the downcasting ugliness.
If I want to extend this to product types (such as function types) and other user-defined types, I think I see either a staged build or a hash table of function pointers in the offing. Those are also pretty ugly, but the latter at least confines the ugliness pretty well to a single piece of the implementation. | http://lambda-the-ultimate.org/node/4716 | CC-MAIN-2019-13 | refinedweb | 6,132 | 61.26 |
So I'm reading in user input that has the form 1U = 1.2A the only variation is that A will be a different letter from B to E. I will be receiving 7 such inputs. One for each letter A through E. I need to extract the floating point number next to the A. So in the previous example I trying to get 1.2. I do this by putting the user input into an istringstream variable and extrafting what I need from there
#include <iostream> #include <sstream> using namespace std; string rateInfo; istringstream inSS; double rate[7], temp; for (int i = 0; i < 7; i++) { getline(cin, rateInfo); inSS.clear(); inSS.str(rateInfo); inSS.ignore(5); inSS >> temp; rate[i] = temp; }
This code actually mostly works. Except for when i = 4. The specific input I'm testing when i = 4 is 1U = 5.6E. Instead of reading in 5.6 it reads in 0.
I also discovered if input 1U = 5.6 instead of 1U = 5.6E my code works perfectly. I've also tested different numbers along with E such as 6.54 and 1.23 and still everything works except for the E case. So it seems something about the letter E is breaking my code.
I know that the extraction operator reading in 0 is one of the things that can happen when the stream is in an error state but for the life of me I can't figure out exactly what is wrong. Especially since it works for every letter except E. | https://techqa.club/v/q/extraction-operator-reads-in-0-incorrectly-but-reads-correct-number-in-similar-cases-c3RhY2tvdmVyZmxvd3w1NTY2OTA0OA== | CC-MAIN-2021-39 | refinedweb | 259 | 76.82 |
14 September 2010 07:54 [Source: ICIS news]
MUMBAI (ICIS)--?xml:namespace>
In addition to the change in feedstock, MCFL also plans to expand the urea and ammonia capacities at the facility once it receives the required approvals, the source at the Ministry of Environment and Forests said, but did not provide further details.
The project would require an investment of nearly Indian rupees (Rs) 4bn ($86.56m), the source said adding that changing the feedstock to gas would be cheaper and cost effective for the company.
The company had submitted its proposal to the ministry and was awaiting environmental clearance, the source said.
MFCL has the capacity to produce 217,800 tonnes/year of ammonia, 379,500 tonnes/year of urea and 255,500 tonnes of phosphatic fertilizers at the facility, according to the company website.
Mangalore Chemicals and Fertilizers Ltd is a part of the UB Group, a large and diversified business house.
($1 = Rs46 | http://www.icis.com/Articles/2010/09/14/9392991/indias-mfcl-plans-feedstock-change-at-mangalore-ferts.html | CC-MAIN-2014-52 | refinedweb | 157 | 50.06 |
Ok Guys
First, let me state that am a livid C and assembly programmer. A few months ago, I embarked on my journey to learn C++. In my quest the following are the conclusions I have
come to. C++ is everything I have heard it to be, the benefits that C++ offers over C far outways it faults (speed and exe size), and I truly like the new and delete constructs over malloc and free functions. IMHO, C++ is a great language but a better C, maybe.
OOP surely surpasses the procedural method of programming, so their was a reason for the progression
of C to C++, but did it go to far. I mean, come on give me a brake as programmers we should program! Operator overloading, function overloading, inline functions, Reference, Execptions and the jury is still out on namespaces, I consider these trinkets! Hell, even in C their are some constructs that I dont bother to use such as unions and enums, programmer
preference. Think, before we had any of these tools programmers coded as needed.
My style of programming follows the old addage, KISS - Keep It Simple Stupid. I program in C++, my methology
is Object oriented if need be, but I think as a c programmer. I am just wondering as programmers do we use enough common sense i.e., programmer preference or are we lead into excepting verbose tools. Finally, I know that I have not spent enough time in my transition to C++ so I leave that to your responses. | http://cboard.cprogramming.com/brief-history-cprogramming-com/10094-respond.html | CC-MAIN-2015-06 | refinedweb | 257 | 71.04 |
N
- name resolution
The process of translating a name into some object or information that the name represents.
- namespace
A logical, hierarchical naming scheme for grouping related types. For example, DNS, NetBIOS, and LDAP.
- naming context
A contiguous Active Directory® subtree that is replicated on one or more domain controllers in a forest. Also known as a directory partition.
- native mode
A domain where native mode has been enabled using the domain property page in the Active Directory Users and Computers MMC snap-in. Also see mixed mode.
Domains must be operating in native mode for nested groups to be supported.
- NC
See naming context.
- NDS
NetWare Directory Services. | https://msdn.microsoft.com/en-us/library/windows/desktop/ms681918(v=vs.85).aspx | CC-MAIN-2017-34 | refinedweb | 109 | 52.05 |
In order to learn about RxJava, we will go through the example contained in the Chapter09/rx2java/customer-service folder in this book's GitHub repository.
The first thing you should be aware of is that, in order to use RxJava with Quarkus, you have to add an instance of Vertx, which can be found under the io.vertx.reativex.core namespace:
@Inject io.vertx.reactivex.core.Vertx vertx;
That being said, one of the main advantages of including ReactiveX in our project is that it will greatly enhance the capability of transforming data that flows between the observable and the subscriber.
For example, let's take a look at the following use case:
- We want to produce a file with a list of customers to be imported in a ... | https://www.oreilly.com/library/view/hands-on-cloud-native-applications/9781838821470/c6acd541-2f98-45f1-a3be-e608fabd8f41.xhtml | CC-MAIN-2022-27 | refinedweb | 130 | 55.98 |
Episode 126: Font Families, Hamburger Menus, Flux

[MUSIC]

>> ...internets where we talk about all things web design, web development, and more.

>> In this episode we'll be talking about font families, the hamburger menu, Flux, and more.

>> Let's check it out.

[MUSIC]

>> First up is this really cool site called Font Family Reunion. Now, whenever you...

>> Get it? Family Reunion?

>> But font-family, which is a CSS property.

>> Yeah, like if it were Wheel of Fortune, that would be like the before and after.

>> It's like a play on words. I wish there was a name for that.

>> Yeah, I don't know. This is Font Family Reunion. It says "compatibility tables for default local fonts," so basically what this tells you is, if you're using the font-family property and you just give it, well, in this case, nothing, this is what's going to happen. It's going to use the operating system default font. In this case, on all these different variations on Mac OS X, Windows, iOS, Android, and so on, it's using the Times or Times New Roman font. And actually, excuse me, on Android it's going to use Droid Sans. On Windows Phone it will use Se, Segoe.

>> Segwa, Segway.

>> Let's just, let's segue right out of that one. And that's our OS default. But if we actually type in something like, say, Helvetica, which clearly I've already done here, and click show, these are the fonts that will be used. Now, in most cases, since Helvetica is a pretty standard system font across the board, it's supported most places, and it will actually render Helvetica just like you'd expect. So on OS X it will render Helvetica. Here on Windows, it's actually going to switch over to Arial, because Helvetica is not installed, but it at least uses a sans-serif font instead of using Times New Roman.
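What the site is sanity-checking is, in effect, a font-family fallback stack. As a quick sketch (the specific fonts here are just an illustration, not taken from the site):

```css
/* The browser walks this list left to right and uses the first font it
   finds installed. The generic sans-serif keyword at the end keeps the
   fallback in the same style family (e.g. Arial on Windows) instead of
   dropping all the way to the OS default, which is often Times. */
body {
  font-family: Helvetica, Arial, "Droid Sans", sans-serif;
}
```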
2:17 Again, on iOS it's going to use Helvetica. 2:24 Android's just like, I don't know. 2:26 I just, I love Droid Sans so much. 2:28 I'm just gonna use that. 2:30 And then, once again, we get sigwa, Segue. 2:31 [CROSSTALK]. >> Segoe. 2:35 >> On. >> Seego. 2:38 >> Windows Phone. 2:39 And, yeah. 2:41 Any who, really cool site. 2:42 You can type in any font here and figure our whether or 2:43 not that's going to be well supported. 2:48 On different operating systems. 2:50 >> Yeah, very nice. 2:52 Next up, we have a very, very thorough post called all this, 2:54 which goes on to tell what the value of this will be 2:59 in different contexts in your JavaScript applications. 3:04 Now it starts with the most simple version, the global this. 3:09 In a browser, this is the window object. 3:13 And right here they have a script that says, hey log to the console whether or 3:17 not this is equal to the window and that returns true. 3:21 If you are going to test for more equality here, 3:26 we have this variable Foo which is going to be defined in the global context. 3:30 This.foo will equal foo, and so will window.foo. 3:36 Now, if you create a new variable without using the var or 3:41 let keyword in ECMAScript 6, you're adding or changing a property on the global this. 3:45 So, here, we are using another variable called foo. 3:51 It's set equal to the word, bar. 3:55 We redefine that in a function. 3:58 And as we expect, when we run this, and 4:00 then run the function, it changes bar to foo. 4:03 Now if you are in node using the repl, this is the top namespace. 4:07 And you can refer to it as global. 4:14 And it does exactly what you would expect. 4:16 But, JavaScript is a language with many. 4:19 Possible different scopes. 4:22 Inside of functions, this can be different, 4:24 it can refer to a function or global. 4:27 And it can have different meanings depending on whether or 4:30 not, you're using the strict version of JavaScript. 
4:32 And you can also get type errors by trying to set that inside of a global function. 4:36 Now you might think that's it, but no there are even more 4:42 possible definitions and scopes of this, and there are so many in fact 4:45 that I'm going to allow you to read this for yourself because it is so nuanced. 4:50 There is actually a ton to know. 4:56 And you can get yourself into trouble if you don't know exactly what this is, 4:58 because you may be setting different variables and different scopes. 5:04 So, definitely check this post out. 5:07 It will be in the show notes, which you can see right below this video. 5:08 >> I get it. 5:11 Check this. 5:12 Post out. 5:14 >> Yep. 5:15 >> See what you did there. 5:15 Next up is a wonderful article called Testing The Hamburger Icon for 5:17 More Revenue. 5:22 Now we've talked about the hamburger icon many, many times in the past. 5:23 This is the three bar icon that you see on lots of web sites, 5:28 that usually represents an icon for. 5:33 A menu, in fact it's so 5:36 enlarged here I wasn't even really sure what I was supposed to be looking at. 5:38 And maybe that is in fact part of the problem with the hamburger menu. 5:44 It's not necessarily clear that it's a menu. 5:49 Now, I really like this blog post because a lot of mobile or 5:52 where, really any kind of test results will focus on. 5:57 Things that engagement or page hits or whatever. 6:01 This was a really, literally dollars and cents test. 6:05 It, it figured out, does a different type of menu icon making more money. 6:12 And it turns out. 6:17 The answer is yes. 6:18 There there's has been a couple different test that were done here, 6:21 and this ended up being the winner. 6:24 So they had a three lined menu here and 6:27 they also had the word menu right underneath there. 6:30 And like I said. 6:36 All four treatments brought in more revenue than the control, 6:39 just the normal free line hamburger menu. 
6:43 And they say not, just clicks Engagement or other soft metrics, dollars. 6:46 That was really pretty cool. 6:53 So the lesson here is, 6:55 is that the hamburger menu might not be so money after all. 6:56 I bet they're pretty full after all that hamburger menu testing. 7:02 >> Next up, we have an article explaining 7:06 the Flux Application Architecture is something that Facebook. 7:10 Has recently put out. 7:17 And there's even libraries and examples to work with flux. 7:18 Now, this whole article walks through understanding flux, 7:24 which can be pretty complicated. 7:28 Now, here we have a to do component, and a to do store. 7:31 This is going to be a very very basic stripped down version, 7:37 of a Flux application. 7:41 So, what's going to happen is this to do store is going to store the different 7:43 to do items, and then the do f component will render them. 7:48 So, what happens when you create a new to do item? 7:54 Well the user will enter that and 7:58 then something called the to do action creators will create it. 7:59 Fire this action that says, hey, this has been created. 8:04 And then something called the Dispatcher will figure out 8:08 what to do with that action. 8:11 Finally, the Dispatcher will call the callback of ToDo Store, 8:15 send that to the ToDo Store. 8:19 Which waits for and emits a change event, 8:21 sends that back to the to do app component which will potentially re render it and 8:24 then this whole thing can happen very many times. 8:29 Now, this critical walks through and shows you what happens, 8:33 at each of these different points in the application with code. 8:36 You can, of course, download the entire app, application example. 8:40 But what's great about this, is it shows you where exactly everything is happening. 8:45 And it gives you the snippets from the different parts of the example. 8:50 Along with commentary on what happens. 8:55 Now, I'm not gonna go through and read everything here. 
8:57 But if you've been struggling to understand the flux architecture. 8:59 Definitely check this out. 9:03 Now, something else that's important to remember about Flux is, 9:04 it is different from the model view controller architecture in JavaScript. 9:07 It's a completely different paradigm of thinking that involves one-way data flow. 9:11 >> Also, completely different architecture than. 9:16 What was featured in Back to the Future. 9:19 >> Right. That would be the flux capacitor. 9:21 >> That's wha- 9:23 >> Which interfaces with the time circuits. 9:24 >> That's what I thought this article was gonna be about. 9:26 Very >> Wonder what's gonna happen when 9:28 this website hits 88 miles per hour? 9:30 >> Very disappointing it wasn't about time travel. 9:33 Next up is a ux project checklist. 9:37 This is a wonderful checklist that is well about ux. 9:41 And it's broken down into research, planning, exploration, communication and 9:46 it's a lot of stuff that you want to make sure that you're doing. 9:52 Kinda as you move through these different phases of your project. 9:55 And the nice thing is that they have links for everyone of these that go to 9:59 different resources that sort of describe what each one of these aspects is. 10:05 Now, research planning, exploration communication that's all. 10:11 Kinda boring stuff. 10:15 There we go, creation. 10:17 Let's just get right into it, not do any kind of research. 10:18 UI elements, we've got those. 10:22 We got some some gestures, responsive. 10:24 All right. 10:27 >> Good. 10:28 >> I think the the website's all done. 10:28 I don't wanna hear any feedback about it. 10:31 >> No. >> Just kinda- 10:33 >> You don't need to. 10:33 >> Finalize stuff, and yeah, I think, I think that's it. 10:34 Testing, [SOUND] I'm not going to do that. 10:38 >> No [LAUGH]. 10:40 But anyway. >> Waste of time. 10:40 >> Really cool stuff. 
10:42 Definitely be sure to use this maybe on your next project and kinda look 10:43 through each step and kinda think about whether or not, you want to do these. 10:50 And as you go through, we can check them off. 10:55 >> Yeah, don't launch your website without each one of these being checked. 10:56 >> Exactly. You've got to do 10:59 every single one of them, maybe. 11:00 >> Yeah, whatever. 11:03 >> Yeah. 11:05 That's all we have time for this week. 11:05 I'm @nickrp on Twitter. 11:07 >> And I am @jseifer. 11:08 For more information on anything we talked about, 11:09 check out the show notes right below this video. 11:11 Thank you everybody for watching, and we will see you next week. 11:13 | https://teamtreehouse.com/library/episode-126-font-families-hamburger-menus-flux?t=668 | CC-MAIN-2021-49 | refinedweb | 2,143 | 83.86 |
From Bugzilla Helper:
User-Agent: Mozilla/5.0 (Windows; U; Win 9x 4.90; en-US; rv:1.6)
Gecko/20040113
Description of problem:
Programs which use popt leak memory in "expandNextArg" when parsing
int options, after "poptFreeContext" is invoked. Given the following
short program:
#include <popt.h> // For parsing command line options

int main(int argc, char *argv[])
{
    int value = 0;
    int intOptions = (POPT_ARG_INT | POPT_ARGFLAG_ONEDASH);
    poptContext optCon; // Context for parsing command-line options
    struct poptOption optionsTable[] = {
        POPT_AUTOHELP
        { "value", 'v', intOptions, &value, 0, "Enter an int", 0 },
        { NULL, 0, 0, NULL, 0 }
    };
    optCon = poptGetContext(NULL, argc, (const char**) argv,
                            optionsTable, 0);
    poptSetOtherOptionHelp(optCon, "Shop Help\n");
    if (argc < 2) {
        poptPrintUsage(optCon, stderr, 0);
        return 1;
    }
    int c;
    while ((c = poptGetNextOpt(optCon)) >= 0) {
    }
    poptFreeContext(optCon);
    return 0;
}
Then use valgrind to check for memory leaks in the above (named test):
...
If I rerun it with a longer int:
>valgrind --leak-check=yes test -v=12
It reports 3 bytes lost. Similarly:
>valgrind --leak-check=yes test -v=123
reports 4 bytes lost.
Probably not too serious for normal users using popt once to parse
command line arguments. Only possibly serious if someone was using
popt in a strange way, parsing things repeatedly...
Version-Release number of selected component (if applicable):
popt-1.7-1.06
How reproducible:
Always
Steps to Reproduce:
1.Take any popt program with int arguments which exits cleanly (see
simple example above)
2.Run it with valgrind to check for leaks
3.valgrind will report leaks the length of the argument supplied (+1)
Actual Results:
...
Expected Results: No memory leaks expected
Additional info:
Standard RH8 on Dell i686 desktop.
All arguments returned to the caller are malloc'd.
It's up to the caller, not popt, to free.
From the man page, it sounded like the application needs to call free
for poptParseArgvString and poptDupArgv, but I don't see how that's
the case for poptGetNextOpt. It doesn't return any char pointers,
and there are no extra parameters in this example to free.
Any chance you could point out where I'm supposed to free something in
the above example? Or, for that matter in the example in the man page
which has the exact same memory leak if you use the option "--bps=1"?
I grabbed the source for popt (1.7), and I'm now pretty confident that
this is a real bug. It affects both string and int parms. Even if
you free the string return, there is an additional string that isn't
deallocated.
Valgrind identified the realloc right at the end of "expandNextArg" as
the source of the leak, and that line is clearly commented:
t = realloc(t, strlen(t) + 1); /* XXX memory leak, hard to plug */
I'm not sure I understand this code well enough to be sure of how to
fix this, but playing around with it a bit, the pointer to it gets
null'd without being released in "poptResetContext", in this loop:
    while (con->os > con->optionStack) {
        cleanOSE(con->os--);
    }

Notice that this loop does a "cleanOSE" on every "con->os", except for
the last one (where con->os == con->optionStack). The change that
fixes this for me is adding a call to _free right after the loop:

    while (con->os > con->optionStack) {
        cleanOSE(con->os--);
    }
    _free(con->os->next.
Yes, this bug still exists in current Fedora Core and RHEL releases. I've
modified the "product" to be RHEL 4.1, which is what I'm currently running. The
simplest way to verify this bug is to just use "rpm", which also has this bug,
because it uses popt to parse its parameters. So even a command as simple as
"rpm --help" has a small memory leak, which can be seen using valgrind:
$ valgrind --tool=memcheck --leak-check=yes rpm --help
...
==5326== LEAK SUMMARY:
==5326== definitely lost: 47 bytes in 1 blocks.
==5326== possibly lost: 0 bytes in 0 blocks.
==5326== still reachable: 9924 bytes in 141 blocks.
==5326== suppressed: 200 bytes in 1 blocks.
The 1 block leak in popthelp.c has been fixed in rpm/popt cvs, will be in popt-1.10.8-0.11 when built.
Meanwhile, there are other one-time mallocs of data that cannot be freed without changing popt's ABI.
UPSTREAM or WONTFIX, your call.
This problem is IMHO solved in Rawhide by popt-1.12-3. If not, please open a
separate bug for Fedora to get this split off from RHEL.
We are given a list of words that have both 'simple' and 'compound' words in them. Write an algorithm that prints out a list of words without the compound words that are made up of the simple words.
Input: chat, ever, snapchat, snap, salesperson, per, person, sales, son, whatsoever, what, so.
Output should be: chat, ever, snap, per, sales, son, what, so
@tfxcrunner88823 I have created a new subcategory and moved this question to the Snapchat subcategory. Thanks!
The idea: first sort the words by length, then generate a trie and put the words in it, and check in the trie whether each word is compound.
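That sorted-words-plus-trie idea could be sketched like this (a hypothetical illustration, not code from the thread; names such as TrieNode, is_compound, and simple_words are my own):

```python
class TrieNode:
    def __init__(self):
        self.children = {}
        self.is_word = False

def insert(root, word):
    node = root
    for ch in word:
        node = node.children.setdefault(ch, TrieNode())
    node.is_word = True

def is_compound(root, word, start=0, parts=0):
    # word[start:] must split into at least two words already in the trie
    if start == len(word):
        return parts >= 2
    node = root
    for i in range(start, len(word)):
        if word[i] not in node.children:
            return False
        node = node.children[word[i]]
        if node.is_word and is_compound(root, word, i + 1, parts + 1):
            return True
    return False

def simple_words(words):
    root, out = TrieNode(), []
    for w in sorted(words, key=len):  # shorter words enter the trie first
        if not is_compound(root, w):
            out.append(w)
        insert(root, w)
    return out
```

Processing shortest-first means every potential part of a compound is already in the trie by the time the compound word is checked.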
@tfxcrunner88823 you can use set
def compositeWord(word, s):
    if not word:
        return True
    for prefix in (word[:i] for i in range(1, len(word) + 1)):
        if prefix in s:
            if compositeWord(word[len(prefix):], s):
                return True
    s.add(word)
    return False

def simpleWords(words):
    s = set()
    return [w for w in words if not compositeWord(w, s)]

if __name__ == '__main__':
    print(simpleWords(sorted(["chat", "ever", "snapchat", "snap", "salesperson", "per",
                              "person", "sales", "son", "whatsoever", "what", "so"],
                             key=lambda s: len(s))))
@elmirap wow great! I'm more of C guy, so I'll try to convert this. Thanks! If anyone else has code or solution ideas they want to post, that be awesome.
There:
#1 by yohami - January 2nd, 2008 at 05:36
Hi Zeh. How does this save time? all the methods in the interface have to be defined again on each class, right?
#2 by dgelineau - January 3rd, 2008 at 01:47
You use interfaces and polymorphism when you want to code in a way that is more maintainable. If the flash compiler is set to soft (it won't complain about type errors) then you don't have to bother with that.
But if your code is going to be used for more than one project or by more than one person, I would advise using this kind of technique. Having your compiler in strict mode is really good for catching error before they arise.
So in my case, I wanted to check if I could shoot this monster or hurt it. I had to tell the compiler which monster it was in order to use the getShoot method. In a complex game, it could have been one type of monster from a dozen type and I would have to check the current monster against each type. Instead I make sure that every monster implements the IMonster interface and because of that the compiler will throw me an error if I don’t code a getShoot and canGetShoot method with the same definition as in the interface (same parameters and return type).
When I will shoot a monster, I only have to cast any monster to the IMonster type and I am sure it will have the previously mentionned methods. In time it will save me trouble, which might not be obvious at first.
#3 by yohami - January 3rd, 2008 at 16:26
Hi dgelineau,
Thanks. I get it. I thought the interface methods would be inherited or utilized, but they are just a template to make sure a certain class meets the standards. I will probably use it too.
Cheers,
Yohami
#4 by Will - December 7th, 2008 at 15:57
Are you sure this is correct?
I’m pretty sure you can’t apply an interface to display objects as you will get a type error when you add it to the display list. Take the following for example:
var myMonster:IMonster = new spriteMouton();
addChild(myMonster);
This will throw a type coercion error because as far as IMonster is concerned it know’s nothing about inheriting from a DisplayObject.
#5 by Dan - December 14th, 2008 at 23:54
Hey Will, you can cast your object to DisplayObject to add it.
var myMonster:IMonster = new spriteMouton();
addChild( DisplayObject( myMonster ) );
#6 by g - December 25th, 2008 at 10:22
thats pretty good explanation in the comment, dgelineau.
#7 by David - February 17th, 2009 at 15:11
Regarding casting a spriteMouton to Displayobject.
The IMonster interface could be extended to the interface of the DisplayObject and thereby ensuring that classes implementing the IMonster always must extend DisplayObject hence no need to type cast. Just a thought.
#8 by zedia.net - February 17th, 2009 at 16:56
@David
In ActionScript, an interface cannot extend a class; it can only extend another interface, so we cannot do what you are saying, but it would be nice to do so.
#9 by Paul - February 22nd, 2009 at 01:43
Shouldn’t :
be:
As mentioned above the Interface is not a Class and therefore will throw type errors because it is not a type. If it is declared correctly there should be no need for coercion / casting.
#10 by zedia.net - February 23rd, 2009 at 15:58
@Paul
Yes and No, if what you are looking for is polymorphism then you probably want to cast your object to an interface.
Now you could just do what you say and later push myMonster into an array and when retrieving it, cast it to the interface and that would work just as good. It all depends of the context of your application.
#11 by steve76 - March 18th, 2009 at 12:02
@post 10
I think that You never will do a cast in the Paul code.
You can use myMonster as MovieClip or implementation of a IMonster interface without casting in every method that receive as parameters or an MovieClip or an IMonster.
I’m a newbee from the point of view of AS3, but I work with PHP5, C++, Java and so on and that is the main rule ( I suppose so ). am I in fault for AS3?
Bye,Ste
#12 by red tuttle - March 28th, 2009 at 15:01
Is there an easy way to check if a particular object implements an interface?
I have a collection of objects of type IFoo, of which any of them may or may not implement a certain interface, maybe IBar or whatever.
in C# I would do:
if (foo.type is IBar)
But is is not a keyword in CS3
#13 by zedia.net - March 28th, 2009 at 15:50
You can in Actionscript 3 use the is keyword like this:
if (entityList[i] is IBar) {
I have used it and it works perfectly.
#14 by John Giotta - April 16th, 2009 at 09:43
Interace is useful when dealing with shared asset libraries as well. You may refer to the interface class in the application for general definitions of what implementing class will define. Much like intrinsic did for AS2.
#15 by ad - April 26th, 2009 at 16:34
Just tried this for the first time and found no need for casting.
#16 by Fraanske - July 14th, 2009 at 14:53
if (monsterInstance as IMonster)
{
// monsterInstance implements IMonster
} else {
// monsterInstance does not implement IMonster
}
trace(monsterInstance as IMonster)
// Traces either a monsterInstance or null
This is the simplest Interface check I could come up with.
#17 by Joshua - August 19th, 2009 at 18:12
You can also use the “as” parameter.
var newMonster:IMonster = new ScaryMonster ();
addChild (newMonster as DisplayObject);
#18 by brecht - August 28th, 2009 at 06:38
this doesn’t work because you can’t call the displayObject specific parameters like x, y, addEventListener and so on. There has to be another way to make movieclips subject to an interface without losing it’s displayObject specific methods and params. Anyone?
#19 by brecht - August 28th, 2009 at 06:42
Well i guess you can just cast to DisplayObject everytime you need a property but still…
#20 by Weeble - December 15th, 2009 at 10:31
very usefull explanation, I’ve struggled to get my head round this for a while now and you helped clear it up. Thanks!
#21 by ayu - March 27th, 2010 at 18:34
hi, how do we have public variables in interfaces?
#22 by zedia.net - March 28th, 2010 at 15:08
You don’t need to say variables and methods are public in an interface because it is implicit. An interface is a minimum list of public variables or methods that a class must have in order to implement it.
#23 by Gena - April 14th, 2010 at 15:27
I’m very confused with interfaces because i do a lot of elearning and i’ve never seen an example similar to what i do. Can you please give me an example of using interfaces in elearning CBTs? I make a lot of templates and they all have data loaded from xml and back, next, and help buttons and a review quiz at the end. Then a results screen telling you what you missed(your score) and what the right answer was and it’s printable. Thanks!
#24 by Andre - April 14th, 2010 at 23:53
An interface is like an abstract class: it cannot be instantiated. That means you cannot do this: var im:IMonster = new IMonster();. To find an interface useful you have to understand polymorphism. A simple definition of polymorphism is: from generalisation to specialisation. Ex: You have fruits and you want to know how many vitamins they contain and display it. Here is some code in ActionScript:

// (Generalisation)
public interface IFruit
{
    function getVitamins():String;
}

// (Specialisation)
public class Apple implements IFruit
{
    private var number_of_vitamins_:String;
    // some code
    public function getVitamins():String
    {
        return number_of_vitamins_;
    }
}

public class FruitDisplayer extends Sprite
{
    private var displayedtext_:Label;
    // Constructor plus more code
    public function displayFruitVitamins(fruit:IFruit):void
    {
        displayedtext_.text = fruit.getVitamins();
    }
}

You see displayFruitVitamins can receive any IFruit like apple, orange, banana...
P.S.(1) For the purpose of simplifying things I used String, but it's more logical to use an integer for number_of_vitamins_ and the getVitamins() function. Cast it into a string before displaying.
P.S.(2) I am a C++ and C# coder, I just started with ActionScript, so if there is something more suitable than a Label just to display text (no user input) feel free to tell me, thx.
#25 by Good Question - April 27th, 2010 at 07:16
I believe they were asking about using an interface to require the presence of EITHER an explicit setter/getter or an implicit setter/getter. Explicit works, but implict does not…
Unless there is some way around it that I am not aware of, interfaces prevent you from using implicit getters and setters – that’s bad!
#26 by Neus - June 2nd, 2010 at 08:29
@Good Question
If you declare your implementation variables as bindable (or the whole class), the compiler will accept…
This is because the compiler change the variable in two getter and setter, a mimick of the interface implementation…
#27 by roee - June 30th, 2010 at 04:22
thanks
#28 by Kawika - August 25th, 2010 at 23:54
This helps with learning interfaces, thanks.
#29 by Chris - August 28th, 2010 at 11:30
Hehe, it’s this short intros that saves the average AS3 starter a lot og “head against the wall/desk” time.
#30 by Quakeboy - November 24th, 2010 at 13:00
Thanks for the quick tutorial.. apt length and to the point explanations.
#31 by northmantif - December 6th, 2010 at 06:34
and what’s the difference between making an Interface, and extending class from a Fruit class i.e. like in the example above, but to pass to the:
function DisplayFruitVitamins(fruit:Fruit):void
{
displayedtext_.Text=fruit.getVitamins ():
}
DisplayFruitVitamins(banana);
??
#32 by asciiman - December 17th, 2010 at 15:42
@northmantif
An interface is a signature, while if you extend from a Fruit class you must actually implement the fruit class. How would you implement the getVitamins() function for a Fruit? You can’t, because generic fruits don’t have vitamins. A generic fruit is just a generalization, not actually real.
So interfaces are good for when you need to generalize something that is in common with multiple real things, and you want to create a binding signature for those things.
#33 by jack - January 17th, 2011 at 13:01
@ brecht
But still what?
#34 by max - January 18th, 2011 at 21:28
For those struggling with the fact that once an interface is implemented you lose, for example, the DisplayObject inheritance (because your interface cannot extend a class)... the solution is the other way around: you shouldn't cast to a DisplayObject, but have your object already typed as a DisplayObject and then eventually cast to your interface when you need specific stuff from there (isn't that the meaning of an interface..
). If I have a base class Page (which may or may not implement an interface) but for sure will extend a displayobject and other subpages like Homepage, Contactpage and so on which implement an interface.. when I create an Homepage instance I create it as a Page(displayobject) so I can move it, attach listerners and add it to stage.. only when I need to call a specific method from the interface I cast it to the interface and for example call an initPage() …
#35 by Tim - June 6th, 2011 at 10:27
@ yohami
In addition, this will allow your code to be dynamic down the road. Think of an interface as a way of building functions that can be variable within a class. If you create an interface, then you can create several classes that implement that interface. In this way, those classes define a behavior (functionality) described in the interface. Then you can create an instance of these classes within your main object, and change it over time to basic render all the functions within the interface as ‘variable’ functions.
The short and easy description is that using interfaces allows you to add dynamic behavior (functionality) to your classes, as opposed to hard coding it.
#36 by steve - June 15th, 2011 at 19:17
I’m building a game with this sort of class hierarchy:
Class Entity extends Sprite
Class Player extends Entity
Class Platform extends Entity
Class Monster extends Entity
These are the main classes, but within class player and class monster I have a reference for a settings object:
Interface EntitySetting
public get…set…
public methods….
abstract methods… (implemented by specific child class)
Class MonsterSetting implements EntitySetting
Class PlayerSetting implements EntitySetting
Class PlatformSetting implements EntitySetting
So in the Specific class settings objects, each using the function headers from the EntitySetting class.
So then I can just have an array of Entity objects in my main game driver class. You can easily perform operations on all objects at once by calling them all Entities, then any specific code will filter through their Settings object. If it is Melee dps monster or a ranged caster monster, or a tall platform, short platform with spikes, w/e.
#37 by Daniel - August 3rd, 2011 at 22:59
For instance, you can declare in the interface:

function get displayObject():DisplayObject;

in the spriteMouton class:

public function get displayObject():DisplayObject
{
    return this;
}

and in another class:

var see:IMonster;
see = new spriteMouton();
see.displayObject.x = 100;
see.displayObject.y = 100;
#38 by encoder - November 28th, 2011 at 08:25
@ yohami
by letting you focus on architecture and not the algorithms that actually control the "monster", for example.
Cost visibility is the first step to managing and forecasting cloud costs. But before you splurge on a Kubernetes cost monitoring solution, make sure that it includes these three crucial metrics.
Picking the right cost monitoring areas to focus on makes a difference in how you budget for the cloud and investigate cost spikes. Here are three metrics that help teams meet their FinOps goals.
3 cloud cost metrics every team needs to track
1. Daily cloud spend
Are your current cloud costs compatible with your budget? What is your burn rate?
To keep cloud expenses in check, you need to have all the data at hand to easily extrapolate your daily or weekly expenses into your monthly bill. The daily spend report helps to do just that.
Suppose you have a budget of $1,000 per month. You should be able to check whether you’re still running under the budget or at least in line with it. If your average daily spend is closer to $50 than c. $33 (30 days x $33 = $990), you’re likely to end up with a higher cloud bill than expected.
This is an example of the daily spend report from CAST AI, which shows all this data in one place:
Another benefit of the daily cloud costs report is that it allows you to identify outliers in your usage or spend. You can verify how much you’ve spent each day for the last two weeks and double-check that data for any outliers or cost spikes that might lead to cloud waste.
2. Cost per provisioned and requested CPU
Another good practice is tracking your cost per provisioned CPU and requested CPU. Why should you differentiate between these two reports?
By comparing the number of requested vs. provisioned CPUs, you can discover the gap between what you pay for and what your workloads actually ask for, and calculate how much you're really spending per requested CPU to make your cost reporting more accurate.
If you're running a Kubernetes cluster that hasn't been optimized for cost, you will see a significant difference between how much you're provisioning and how much you're actually requesting. You'll see that you're spending money on provisioned CPUs and end up requesting only a small share of them.
Let’s illustrate this with an example:
Your cost per provisioned CPU is $2. Due to the lack of optimization, you waste a lot of resources. As a result, your cost per requested CPU is $10. This means that you’re running your cluster for a price that is 5x higher than expected.
3. Historical cost allocation
You’re an engineering manager who got a cloud bill from the FinOps manager asking why the heck it’s so high. You went over budget like most teams that use public cloud services. But what ended up costing you more than expected?
This is where historical cost allocation makes a difference.
This report can save you hours, if not days, on investigating where the extra costs come from. By checking last month's spend dashboard, you can instantly view the cost distribution between namespaces or workloads in terms of dollar spend.
See a couple of workloads running and using a lot of money but not doing anything? These are idle workloads - the prime driver of cloud waste.
You have the solution. Now you know what to clean up and make the financial operations manager happy next month.
Access these three reports for free
The CAST AI Kubernetes cost monitoring module includes all of these three crucial reports and gives you access to heaps of historical cost data free of charge.
Connect your cluster and analyze your cloud costs in real time to never go over budget again.
CAST AI clients save an average of 63% on their Kubernetes bills
Connect your cluster and see your costs in 5 min, no credit card required.
Created on 2018-02-24 08:58 by anthony-flury, last changed 2018-09-14 21:30 by berker.peksag. This issue is now closed.
Using the unittest.mock helper mock_open with multi-line read data, although readlines method will work on the mocked open data, the commonly used iterator idiom on an open file returns the equivalent of an empty file.
from io import StringIO
from unittest.mock import patch, mock_open

read_data = 'line 1\nline 2\nline 3\nline 4\n'
with patch('builtins.open', mock_open(read_data=read_data)):
    with open('a.txt', 'r') as fp:
        assert [l for l in StringIO(read_data)] == [l for l in fp]
will fail although it will work on a normal file with the same data, and using [l for l in fp.readlines()] will also work.
There is a relatively simple fix which I have working locally, but I don't know how to contribute it back to the library - or even whether I should.
Is this related to #33236 ?
No - it isn't related.
In the case of mock_open; it isn't intended to be a simple MagicMock - it is meant to be a mocked version of open, and so to be useful as a testing tool, it should emulate a file as much as possible.
When a mock_open is created, you can provide an argument 'read_data' which is meant to be the data from your mocked file, so it is key that the dunder iter method actually returns an iterator. The mock_open implementation already provides special versions of read, readline and readlines methods which use the 'read_data' initial value as the content.
Currently though the dunder iter method isn't set at all, so the returned value is an empty iterator, which makes mock_open unable to be used to test idiomatic Python:
def display(file_name):
    with open(file_name, 'r') as fp:
        for line in fp:
            print(line)
As a trivial example, the above code when mock_open is used will be the equivalent of opening an empty file, but this code:
def display(file_name):
    with open(file_name, 'r') as fp:
        while True:
            line = fp.readline()
            if line == '':
                break
            print(line)
Will work correctly with the data provided to mock_open.
Regardless of how and when #33236 is solved - a fix would still be needed for mock_open to make it provide an iterator for the mocked file.
Anthony's PR is awaiting merge. Although Yury has reviewed it, as the core developer mock and unittest experts, it would be good if Michael and/or Robert could also take a look.
This is basically a duplicate of bpo-21258, but I haven't closely looked at the patches in both issues yet.
We should probably consider adding support for __next__ as well.
But __next__ is a method on the iterator;
so long as __iter__ returns a valid iterator (which it does in my pull request), it will by definition support __next__.
Although it is entirely possible that I have misunderstood what you are saying.
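Anthony's point can be seen with a plain stand-in object (illustrative only, not mock code): once __iter__ returns a real iterator, next() is resolved on that returned iterator, so nothing extra is needed on the object itself.

```python
class Handle:
    """Minimal illustrative stand-in for a file handle."""
    def __init__(self, data):
        self._lines = data.splitlines(keepends=True)

    def __iter__(self):
        # Returning any valid iterator is enough; __next__ lives on it.
        return iter(self._lines)

h = Handle('line 1\nline 2\n')
it = iter(h)
first = next(it)             # 'line 1\n'
all_lines = [l for l in h]   # ['line 1\n', 'line 2\n']
```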
New changeset 2087023fdec2c89070bd14f384a3c308c548a94a by Berker Peksag (Tony Flury) in branch 'master':
bpo-32933: Implement __iter__ method on mock_open() (GH-5974)
Thanks for the patch, Anthony. I consider this a new feature, so I removed 3.6 and 3.7 from the versions field. We can backport to 3.7 if other core developers think that it's worth fixing in the latest maintenance branch.
Berker,
Thanks for your work on getting this complete.
I would strongly support backporting if possible.
3.5 and 3.6 will be in common use for a while (afaik 3.6 has only now been delivered to Ubuntu as the default Python 3), and this fix does allow full testing of what would be considered pythonic code.
Ned, as release manager of 3.6 and 3.7, what do you think about backporting this to maintenance releases?
While I think arguments could be made either way, this seems to me to be somewhat more of a bugfix (rather than a feature) in the sense that mock_open did not correctly emulate a real textfile open at least for an idiom that is commonly used (while acknowledging that mock_open does not claim to fully implement open or all classes of IO objects). The key question to me is would backporting this change likely cause any change in behavior to existing programs running on 3.7.x or 3.6.x. If yes, then we definitely shouldn't backport it. If not, then there is now the issue that people using mock_open on 3.7.x (or possibly 3.6.x) still can't depend on its behavior unless they explicitly check for, say, 3.7.1 or 3.6.7. That's not particularly user friendly, either. So perhaps it *is* best to not backport; if the functionality is needed in earlier releases, one could create a PyPI package to provide it, for example.
Thanks, Ned!
Anthony, I'm one of the maintainers of and I'd be happy to merge a PR that backports the fix to the PyPI version of mock.
Thank you.
New changeset c83c375ed907bdd54361aa36ce76130360f323a4 by Berker Peksag (Miss Islington (bot)) in branch '3.7':
bpo-32933: Implement __iter__ method on mock_open() (GH-5974) | https://bugs.python.org/issue32933 | CC-MAIN-2020-16 | refinedweb | 872 | 72.16 |
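With the change merged to master and backported to 3.7 as above, the idiom from the original report behaves like a real file. A quick sanity check on a current interpreter (assuming Python 3.7.1 or later):

```python
from unittest.mock import mock_open, patch

read_data = 'line 1\nline 2\nline 3\n'
with patch('builtins.open', mock_open(read_data=read_data)):
    with open('a.txt') as fp:
        lines = [line for line in fp]   # no longer an empty file

print(lines)  # ['line 1\n', 'line 2\n', 'line 3\n']
```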
PDDocColorConvertPage - Convert color pages to gray (Droptix, Jun 24, 2012 11:49 PM)
I want to convert a specified color page to gray programmatically using `PDDocColorConvertPage` with Dot Gain 15% as I would do it in Acrobat 9 Pro manually. In the API reference the syntax is explained but I don't understand it, especially `PDColorConvertParams`.
Is there anyone who can post a short example of code?
I'm trying to achieve this via COM-programming and Python. An example in VB, JavaScript or similar is also great.
Thanks in advance!
1. Re: PDDocColorConvertPage - Convert color pages to gray (lrosenth)
Jun 25, 2012 8:14 AM (in response to Droptix)
PDDocColorConvertPage and PDColorConvertParams are C/C++ methods and data structures – you can get to them from COM or Python.
In order to convert the page from COM/Python, you will need to use the JavaScript method for color conversion. Details in the docs.
FYI: Acrobat can't be used on a server….
2. Re: PDDocColorConvertPage - Convert color pages to gray (Droptix, Jun 25, 2012 11:38 PM, in response to lrosenth)
I'm not the big C/C++ geek and always have difficulties finding the right parts in the docs. So can you post a link where exactly this is described?
Sorry, but I don't understand the mix between COM/Python and JavaScript... isn't it possible to use the `PDDocColorConvertPage` API call directly via COM? It's also important to know where to find this in the docs.
In the past I wrote some programs using COM to control Acrobat, e.g. to rotate, delete or insert pages. So I know all the basic stuff. I think I'm just looking for about 5-10 lines of code doing the right API call syntax. If somebody can help out with a short example this would really help me to go on. Thanks in advance.
3. Re: PDDocColorConvertPage - Convert color pages to gray (Karl Heinz Kremer, Jun 26, 2012 8:29 AM, in response to Droptix)
The Acrobat SDK consists of different parts:
- plug-in and PDF Library API - these are functions you can call from C/C++, but only within a plug-in or an application that uses the PDFL. The API function you are trying to use is part of this interface and not available via COM.
- Inter-Application Communication (IAC) interface - these are e.g. COM methods that you can call from your Python program.
- JavaScript - an API to automate Acrobat with JavaScript programs that are either embedded in PDF files, or stored on your computer.
In general, you cannot cross API boundaries, but there is a way to execute JavaScript in an application that uses the COM interface. Take a look at the documentation that is part of the SDK; this is all documented, and there are sample programs that come with the SDK. You will, however, not find any Python programs in the SDK - you need to map the use of the COM interface from VB to Python.
Here is a link to a blog post I wrote a few years ago about how to use
JavaScript from within a VB program:
Karl Heinz Kremer
PDF Acrobatics Without a Net
4. Re: PDDocColorConvertPage - Convert color pages to gray (Droptix, Jun 29, 2012 10:58 AM, in response to Karl Heinz Kremer)
Hi Karl Heinz, I tried your scripts but didn't get them working... :-(
First, I wrote a simple JavaScript as you did and saved it to `C:\Program Files (x86)\Adobe\Acrobat 9.0\Acrobat\Javascripts\GrayConversion.js`:
function GrayConversion() {
var message = "The magic happens here...";
console.clear();
console.show();
console.println(message);
return message;
}
// add the menu item
app.addMenuItem({
cName: "grayConversion",
cUser: "Gray Conversion",
cParent: "Document",
cExec: "GrayConversion();",
cEnable: "event.rc = (event.target != null);"
});
When I restart Acrobat there's no new menu item. I also checked the tick box as you described here. As I'm using the German Acrobat, I also tried `cParent: "Dokument",`. Can you figure out what is wrong? Is there any naming convention for the .js file? Does Acrobat automatically "scan" the Javascripts folder and include all of them?
Are you still sure that it's not possible to use `PDDocColorConvertPage` via COM? I don't understand why some parts work and others won't. Is the API somehow incomplete?
The other thing you said is also interesting: do you think it's possible to call that API from an application using the PDFL?
I did something similar with InDesign some years ago :-) So I'd appreciate any tips and hints... I don't see the mistake.
5. Re: PDDocColorConvertPage - Convert color pages to gray (Karl Heinz Kremer, Jun 29, 2012 11:46 AM, in response to Droptix)
I don't have a Windows version of Acrobat 9 handy; I was only able to test with Acrobat X, and the menu item does show up (after changing "Document" to "File" because AX does not have a Document menu anymore).
The "Document" name is a language-independent name, so you have to use that even with the German version of Acrobat. Are you sure that your path is correct? You can easily test that by running a JavaScript command (take a look at this blog post: )
Acrobat will automatically load anything that has a .js extension if it's stored in either the user or the application-level JavaScript directory.
The different APIs are not compatible with each other - you can only call JavaScript methods from within JavaScript (with the exception of using the JS bridge), you can call COM methods only when using COM, and you can call the Acrobat/PDFL functions only when writing an Acrobat plug-in or a PDFL-based program. You could write your own COM server in the form of a plug-in and make other functions available via a COM interface, but the standard IAC API does not provide anything besides what is documented in the SDK IAC documentation.
If you have a license for the PDFL, then you can certainly call this function (and any of the other functions that are part of the Acrobat/PDFL API) from your own application.
6. Re: PDDocColorConvertPage - Convert color pages to gray (Droptix, Jun 30, 2012 1:15 AM, in response to Karl Heinz Kremer)
Aaah, it was probably a permission problem because of Windows' UAC (user account control), so adding menu items via JavaScript was not allowed. I first tried to set this up as a non-admin user using the `runas` command. But even after logging on as the admin user, it looked like Adobe still did not allow JavaScript to add menu items...
Now, when using `Document` and after opening a PDF file in Acrobat (as admin user) the new command is visible as a menu item and it works.
First step done :-) Thanks! Also, app.getPath("app", "javascript"); shows that my JavaScript path is correct.
Now I have to find out how to convert a color page to gray using 15% dot gain. I found some code snippets doing color conversion with profiles... not the same but the right direction. I'll give it a try and still appreciate any useful ideas to achieve this :-)
This is really hard for me because I'm missing a full explanation/documentation of the `PDDocColorConvertPage` API, especially `PDColorConvertParams`.
[Edit:] Huh, my Python 2.6 script tells me this when calling `jso.GrayConversion()`:
com_error: (-2147467263, 'Nicht implementiert', None, None)
Meaning: "not implemented". Seems like it cannot find the JavaScript function `GrayConversion()`?
Everything seems to be fine until `GetJSObject()`. Here's my code:
import sys
import os
import win32com.client

def main():
    App = win32com.client.Dispatch("AcroExch.App")
    PDDoc = win32com.client.Dispatch("AcroExch.PDDoc")
    PDDoc.Open("file.pdf")
    jso = PDDoc.GetJSObject()
    print jso.GrayConversion()

if __name__ == "__main__":
    main()
7. Re: PDDocColorConvertPage - Convert color pages to gray (Droptix, Jul 1, 2012 9:18 AM, in response to Karl Heinz Kremer)
Sorry, Karl Heinz, but I don't get it... even after 6 days of searching and trying. It's a big strain.
My status right now is that I'm using VBScript, because Python's COM bindings do something strange... with VBScript the COM-JS connection works fine.
I found this link that seems to be very useful and very close to what I need; it's about the AcroColor extended API:
But I cannot "transform" it into JavaScript...
Pleeeeeeaaaaase, are you able to transform this into JavaScript code for me and post it here? I visited your homepage and it seems that you could do it because you have the experience and you understand what's going on in Acrobat. I'd really appreciate!
Also: is it possible just to replace `AC_Profile_AppleRGB` with `AC_Profile_DotGain15`? See the reference here.
8. Re: PDDocColorConvertPage - Convert color pages to gray (lrosenth)
Jul 1, 2012 10:05 AM (in response to Droptix)
You are, STILL, trying to use the C++ APIs from JavaScript.
You want the JavaScript API at p.htm?href=JS_API_AcroJS.88.1.html#1515776&accessible=true
Look at the doc.colorConvertPage() API.
9. Re: PDDocColorConvertPage - Convert color pages to gray (Karl Heinz Kremer, Jul 2, 2012 6:39 AM, in response to lrosenth)
Just because you find a function somewhere in the Acrobat SDK documentation does not mean you can use it in your program: as I said before, you cannot cross API boundaries. If you have a function that is described in the plug-in API, you can only use it in an Acrobat plug-in (or, in most cases, in an application that uses the PDFL). You need to stick to the API that your application can use (e.g., as lrosenth pointed out, a function from the JavaScript API when you try to either write a JavaScript program, or if you want to use the JS bridge).
This is not a question of the level of experience somebody has; it's just not possible to do that.
10. Re: PDDocColorConvertPage - Convert color pages to gray (phibbus, Jul 3, 2012 8:57 PM, in response to Droptix)
I have a situation slightly different but similar enough in nature not to require a new thread. While I'm using Acrobat X on Win7, I think most everything I'm doing remains consistent in Acrobat 9, and it may help shed some common light.
In my case, I am trying to call "colorConvertPage" (the Javascript API version Iorsenth recommends, above, and not the "PDDocColorConvertPage" exposed by the C++ API) from a VBscript via the JSObject. My problem is that I cannot figure out how to correctly configure and pass the two arrays of "colorConvertAction" objects that the JavaScript method requires as its second and third parameters.
The purpose of the script is to merge large numbers of TIFFs into multiple PDFs, combining them based on a filename schema. After each document is created, I want the script to convert all of the page images to sRGB before saving.
I have been able to implement Mr. Kremer's method of creating a native Acrobat folder-level JavaScript to perform the color conversion and then calling that from the VBscript. The Javascript is taken almost verbatim from the example given under the "getColorConvertAction" method in the Acrobat X JavaScript API Reference, only changing the profile to sRGB and then adding the loop through all the pages and a slight modification of Karl's code to add the function as a menu item under Edit:
function ConvertAllTosRGB()
{
// Get a color convert action
var toRGB = this.getColorConvertAction();
// Set up the action for a conversion to RGB
toRGB.matchAttributesAny = -1;
toRGB.matchSpaceTypeAny = ~toRGB.constants.spaceFlags.AlternateSpace;
toRGB.matchIntent = toRGB.constants.renderingIntents.Any;
toRGB.convertProfile = "sRGB IEC61966-2.1";
toRGB.convertIntent = toRGB.constants.renderingIntents.Document;
toRGB.embed = true;
toRGB.preserveBlack = false;
toRGB.useBlackPointCompensation = true;
toRGB.action = toRGB.constants.actions.Convert;
// Convert each page of the document
for(var i = 0; i < this.numPages; i++) {
var result = this.colorConvertPage(i,[toRGB],[]);
}
}
// add the menu item
app.addMenuItem({
cName: "ConvertAllTosRGB",
cUser: "Convert Doc To sRGB",
cParent: "Edit",
cExec: "ConvertAllTosRGB();",
cEnable: "event.rc = (event.target != null);"
});
Once that .js is dropped into one of Acrobat's script folders, the function can be successfully called from VBS with "jso.ConvertAllTosRGB" (where "jso" is a JSObject successfully created by a call to a PDDoc's .GetJSObject method.)
Where I'm running into trouble is in trying to replicate that JavaScript function within the VBscript itself (which would be more convenient than having to distribute two separate scripts.) I can successfully create a colorConvertAction object, however I can't seem to successfully cast that object as one in an array to pass to colorConvertPage (nor create the empty array to pass as the third parameter.) The pertinent portions of the scipt are something like...
Dim AcroApp
Dim AVDoc
Dim PDDoc
Dim jso
Dim toRGB
Dim i
Dim result
Set AVDoc = CreateObject("AcroExch.AVDoc")
If AVDoc.Open("c:\path\some.tif", "") Then
Set PDDoc = AVDoc.GetPDDoc
Set jso = PDDoc.GetJSObject
Set toRGB = jso.getColorConvertAction
With toRGB
.matchAttributesAny = -1
.matchSpaceTypeAny = Not .constants.spaceFlags.AlternateSpace
.matchIntent = .constants.renderingIntents.Any
.convertProfile = "sRGB IEC61966-2.1"
.convertIntent = .constants.renderingIntents.Document
.embed = True
.preserveBlack = False
.useBlackPointCompensation = True
.action = .constants.actions.Convert
End With
For i = 0 to PDDoc.GetNumPages - 1
    result = jso.colorConvertPage(i, ????, ????)
    MsgBox(result)
Next
End If
Everything up to the call to jso.colorConvertPage works fine. However, as stated, no method that I've tried to pass the toRGB object as an array member (as called for by the second "actions" parameter) has worked. These include declaring the variable as an array and using the Array function. Nor have I been able to pass an empty array for the third "inkActions" parameter without throwing an error.
Interestingly, passing the toRGB object itself, directly, as both parameters does not raise any errors. However, the function then returns a null value and no conversion actually takes place.
11. Re: PDDocColorConvertPage - Convert color pages to gray (Droptix, Jul 4, 2012 11:58 AM, in response to phibbus)
I wish you had posted your answer a few days ago to spare me the headaches, because one day before you posted this I found the solution for my problem in the right docs.
^^ As I mentioned, it's not hard to code this, but finding the right place is much more difficult... but now I got it. Thanks to everybody here!
phibbus, I think your way is too complicated. Why don't you rename your JS function from `ConvertAllTosRGB()` to `ConvertAll(toProfile)` and submit an additional argument? `toProfile` then would be either "sRGB IEC61966-2.1" or "Apple RGB"... just an example. Of course you could submit any argument to your function. Like this:
function ConvertAll(toProfile)
{
// ...
toAny.convertProfile = toProfile;
// ...
}
What exactly are you trying to change in your VB script?
12. Re: PDDocColorConvertPage - Convert color pages to gray (phibbus, Jul 4, 2012 6:26 PM, in response to Droptix)
Hi Droptix,
I'm glad you got it working. What method did you wind up using?
Yes, the JavaScript function could certainly be made much more versatile by rewriting it to accept the conversion profile and other of the colorConvertAction's properties as arguments. I was mainly just doing a quick retool of the SDK documentation example by way of illustration.
The thing I'm trying to ascertain in the VBscript is whether or not it is possible to successfully call colorConvertPage from an external automation script using the JSObject bridge (i.e., accomplish the same thing that the JavaScript example does without having to first install the folder level script.)
13. Re: PDDocColorConvertPage - Convert color pages to gray (Droptix, Jul 6, 2012 6:42 AM, in response to phibbus)
No folder-level script is a cool idea, and this works for me. Please test it and give a short feedback:
Set argv = Wscript.Arguments
inFile = argv(0)
outFile = argv(1)
strPages = argv(2)

' convert first page (0) to gray
doc.colorConvertPage 0, Array(target), Array()
' save converted document to `outFile`
doc.saveAs(outFile)
doc.closeDoc(True)
' close document
PDDoc.Close
' exit and close Acrobat
App.Exit
' clean up
Set PDDoc = Nothing
Set App = Nothing
Now here's the batch file I'm using for this one:
:go
cls
cscript.exe /nologo GrayConversion2.vbs "D:\folder\subfolder\in.pdf" "D:\folder\subfolder\out.pdf" 1,3,6,7,8
pause
goto go
What I need to do now is to go through the page numbers and auto-convert the ones specified by the script arguments (1,3,6,7,8). Maybe you got a good idea?
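The index arithmetic involved (page numbers are 1-based on the command line, while colorConvertPage expects 0-based indices) can be sketched with a hypothetical helper, shown in Python for brevity rather than VBScript:

```python
def parse_pages(arg):
    """Turn a '1,3,6,7,8' style argument into 0-based page indices."""
    return [int(p) - 1 for p in arg.split(',')]

print(parse_pages('1,3,6,7,8'))  # [0, 2, 5, 6, 7]
```

The same split-and-subtract-one step is what the VBScript version needs to do before looping over the pages.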
14. Re: PDDocColorConvertPage - Convert color pages to gray (phibbus, Jul 7, 2012 12:47 PM, in response to Droptix)
Hi Droptix,
I tested your code, and yes, it did work for me (with the exception noted below.) I see that my mistake was trying to call colorConvertPage as a method of the JSObject, itself, rather than as a method of a JavaScript Doc opened through the JSObject.
I'm not quite certain what you're trying to do with the goto loop in your batch file. Do you have multiple .pdf files that need converting, or is it just that single file with multiple pages?
In any event, given the single call to GrayConversion2.vbs in the .bat, the following simple modifications to your .vbs should work (but, again, see the exception, afterward.) Modifications are in red:
Set argv = Wscript.Arguments
inFile = argv(0)
outFile = argv(1)
arrPages = Split(argv(2), ",")   ' array of pages (adjusting for the 0-based first page) to gray
For i = 0 to Ubound(arrPages)
    doc.colorConvertPage arrPages(i)-1, Array(target), Array()
Next
' save converted document to `outFile`
doc.saveAs(outFile)
doc.closeDoc(True)
' close document
PDDoc.Close
' exit and close Acrobat
App.Exit
' clean up
Set PDDoc = Nothing
Set App = Nothing
That works when run from the .bat as you have it, except: The call to colorConvertPage fails on the fifth call for me each time. I'm using a test .pdf that I created, and I've tried varying the order and number of the pages to be converted. It doesn't appear to be anything specific to the content of the .pdf. After four successful calls (with the conversion visibly completed), the method fails on the fifth call and gives a "Server threw an exception" error. I must then hard-quit Acrobat from the Task Manager to be able to test the script again.
15. Re: PDDocColorConvertPage - Convert color pages to gray (Droptix, Jul 7, 2012 11:29 PM, in response to phibbus)
Ah yes, my batch file... it's just a loop to re-start the batch after I press Enter... makes testing easier for me :-) Just ignore it and use the single line instead... but then the CLI window closes and I would have to double-click it again... annoying :-P Sorry, I'm lazy
cscript.exe /nologo GrayConversion2.vbs "D:\folder\subfolder\in.pdf" "D:\folder\subfolder\out.pdf" 1,3,6,7,8
P.S. I've not tested it but I think there's a typo in your script mod:
doc.colorConvertPage arrPages(i)-1, Array(target), Array()
I think `-1` is not needed in the counter and also would cause a runtime error because `colorConvertPage` expects a zero-index and in your case you would start with minus one instead of zero. Also, `arrPages(i)` is a page object and I think you cannot do a math operation like this...
To get this whole thing working I tested a lot of ways to get `this` or JSObject's `doc` object from my VB `PDDoc`... finally I found out that the easiest way seems to be opening the file again via the JSObject. It took me some hours, but in the end you can achieve color conversion without a folder-level JavaScript file, which I find really great!
16. Re: PDDocColorConvertPage - Convert color pages to gray (Droptix, Jul 10, 2012 12:14 PM, in response to phibbus)
Ah yes, I got the same error when converting more than 4 pages:
GrayConversion3.vbs(70, 5) (null): Ausnahmefehler des Servers.
Server exception error... I also have to kill the Acrobat.exe process in task manager :-( OK then we have to use the folder level JavaScript file. | https://forums.adobe.com/message/4529093 | CC-MAIN-2018-17 | refinedweb | 3,384 | 64.3 |
Posted by Cory Mogk, 27 March 2012 8:00 pm
I'm proud to say that we've announced Maya 2013 and all the other cool Autodesk Media and Entertainment products. If you like high level details, please refer to the 2013 Digital Entertainment Creation Software Portfolio announcement. I think the most exciting part of this announcement is that we now have the Autodesk Entertainment Creation Suite Ultimate edition available. The Ultimate edition provides Maya, 3ds Max, Softimage, MotionBuilder, Mudbox and SketchBook Designer in one package. To facilitate all these products being together, we've made the following improvements:
Before we get to the Maya 2013 details, I wanted to share a couple of videos of the cool things the other products in the Autodesk Entertainment Creation Suite Ultimate edition have added. First let's look at the HumanIK interop support in 3ds Max.
Next up is CrowdFX in Softimage which also has a 1 click workflow with Maya.
And SketchBook Pro Designer, if you haven't seen it, is pretty cool.
Maya 2013 introduces the start of something we're calling Open Data. Open Data is about working with and managing complex data with natural, high performance workflows. Based on some of the comments I've seen already, it's important to note that this is work that we see benefiting all Maya users - not just large studios. It can be some technically challenging stuff and we've tried to approach it in a way that small studios can get access to tools that, in the past, only big studios had access to by developing for themselves.
In the rendering realm, we see a lot of improvements to Viewport 2.0 and mental ray for Maya has been turned into a true plug-in. This update to mental ray makes it easier for the rendering developers at Autodesk and nVidia to talk to each other and improve mental ray in Maya. For other renderers, this improves the hooks that they have into Maya.
We've done a bunch of other things, most notably with Nucleus nHair and Animation - on to the details!
Node Editor: The Node Editor presents an editable schematic of the dependency graph, displaying nodes and the connections between their attributes. It allows you to view, modify and create new node connections.
Alembic Caching: The Alembic file format is an open-source format developed for exchanging complex 3D geometry data. Alembic files are highly portable and application-independent so they can be shared, processed, and played back by many content creation applications. Alembic caches provide several performance improvements, including accelerated scene loading of large scenes, faster playback of complex character animation, and real-time play back of geometry data with topology changes. Alembic files also let you share large scenes between various areas of a production pipeline without the large memory overhead of fully editable scene files.
GPU Cache Node: The GPU Cache node draws the contents of an Alembic file in a light-weight, non-editable form directly on the GPU. This allows you to create more complex scenes in Maya without taking a large performance hit.
Attribute Editor: Now you can customize the Attribute Editor window in several different ways.
Custom Attribute Editor templates: You can edit the way attributes are displayed in the Attribute Editor by creating XML-based template files for specific nodes and node types. A template can have one or more views associated with it. Each view describes a particular display layout and can be used to tailor the interface for different purposes.
Custom callbacks: You can use MEL or Python based callbacks to link an attribute to a control or a complex script. Using the <description language ="cb"> tag in your custom Attribute Editor template lets you specify a callback command and links your callback to an attribute.
Use node type filtering to improve Attribute Editor performance: When making a selection in Maya, having the Attribute Editor open may cause performance delays if too many nodes related to the selection are displayed as tabs. To avoid a slowdown, you can use node type filtering to customize which related nodes are displayed in the Attribute Editor.
Creating attributes using attribute patterns: You can create dynamic or extension attributes using attribute patterns. An attribute pattern is a description of the dynamic or extension attributes that can be added to any specific node, or node type. Using this feature, you no longer need to create each attribute using individual addAttr or addExtension commands.
Editing the file path in the file browser: You can now edit the file path in the Look in field of the file browser and use its auto completion functionality.
File referencing options in the Outliner: A new Reference Node display option in the Outliner makes it easier to locate and identify all the loaded and unloaded file references in your scene. You can access the option in the Outliner by selecting Display > Reference Nodes. The Reference Nodes display option is on by default.
Create and manage file references: New Reference menu items in the Outliner let you create and manage file references without opening the Reference Editor. In the Outliner, click a reference node or referenced object to access file referencing commands.
Allow Referenced Animation Curves to be Edited: You can now edit animation curves from referenced files. These changes are managed by the reference node like other reference edits. You can modify an animation curve, such as changing tangent types or editing keyframes, then export the updates as reference edits to an offline file.
Updated reference node Attribute Editor: An updated reference node Attribute Editor displays information about reference nodes, such as file path, namespace, and sharing details.
Operations on multiple references: Using the file referencing options in the Outliner, you can now perform referencing operations on multiple references including loading, unloading and reloading, importing, locking and unlocking.
Preview unloaded content: A new Preview unloaded content option lets you view the hierarchy of the unloaded references in your scene without loading the reference in the scene.
Archiving unloaded references: An option has been added to scene archiving that lets you include files associated with unloaded references in the scene archive.
Merge into selected namespace: A new Merge into selected namespace option lets you choose to merge referenced or imported object namespaces with a namespace that exists in the parent scene. When duplicate namespaces occur, the namespaces are merged and duplicate object names are
incrementally suffixed with a number. This new option lets you keep duplicate namespaces and avoids an accumulation of new namespaces each time your referenced or imported objects have the same name.
Live character streaming: The options in the new Live Connection window (Edit > Live Connection or File > Send to MotionBuilder > Live Connection) are an extension of the Send to commands that were introduced in Maya 2012. Now you can send your HumanIK defined character to MotionBuilder and establish a live streaming connection. This new workflow lets you drive your skeleton or Custom rig with motion capture data, so you can previsualize your retargeting result before baking the final animation from MotionBuilder into your Maya scene.
Thanks to Christian Bloch at for the Tokyo Big Sight environment!
Improved import and export with ATOM file format: You can now share and reuse animation more efficiently using Maya's ATOM (Animation Transfer Object Model). The .atom file type and its associated import/export options let you save specific poses or animation sequences, then easily reload them onto other objects. ATOM options let you set precisely which animation to reuse and how you want to import and export it. After exporting, you can import animation based on the character hierarchy, name matching, or using a template file as a filter.
Trax Clip matching: When manipulating clips of animation in the Trax Editor, new clip matching tools let you define an offset object to better align the movements in your animation sequence.
More easily set relative or absolute clip offsets: Updated clip Offset settings are now included in the Create Clip Options (Animate > Create Clip > ) and in the Trax Editor context-sensitive menu, letting you more easily view and set whether channels have an absolute or relative offset from the previous clip.
Retime animation: In the Graph Editor, the Retime Tool lets you directly adjust the timing of key movements in your animations. This tool provides a new type of timing manipulator in the graph view, letting you shift key moments in time, or warp entire sequences to make them occur faster or slower. For animators working in a pipeline with multiple Autodesk applications, similar animation retiming tools are available.
Converting CAT to HumanIK: The new Send to commands in Maya and 3ds Max let you convert a CAT bipedal character into a Maya compatible HumanIK character. This direct connection lets you transfer your character structure, definition, and animation from 3ds Max into an FK representation on a HumanIK skeleton in Maya. Any changes or new animation that you create in Maya can be updated on your original CAT character, so you can continue to animate in the context of your 3ds Max scene.
Stepped Tangent preview mode: The new Stepped Tangent preview playback mode lets you temporarily set all keys to display with Stepped tangents, switching easily from Spline to Stepped and back. Play your animation in this mode to get a quick view of object positions as they hit each keyframe.
Keyframe and Tangent marking menu updates: The marking menus available for editing keyframes and tangents have been updated to allow manipulation of motion trails, keys, and tangents directly in the scene view.
Camera Sequencer improvements: You can now create an ubercam for camera shots that are keyed with weighted curves. In addition, sequences with gaps between camera shots are now handled correctly.
Improved baking options: New options in the Character Controls Bake menu let you bake animation to a HumanIK skeleton, Control rig, or Custom rig. The Bake menu updates dynamically to display options that reflect the current character's state.
Playblast updated: Maya now supports H.264 Quicktime output on Windows 64-bit. In addition, audio and multi-track audio are supported.
Unified Character Controls: The new Character Controls let you perform multiple character setup tasks in a single window. As you set up your character, the previously independent HumanIK tools appear as tabs in the consolidated Character Controls, simplifying the character set up process. The Skeleton, Definition, Controls, and Custom Rig tabs appear as you select options from the Start pane, Source menu, or Character Controls menu button.
Start pane: Begin the character setup process using the Start pane in the Character Controls window. Whether you are creating a new HumanIK skeleton from scratch, defining an existing skeleton, or adding a Control rig or Custom rig mapping to your character, this pane is designed to guide you through the setup process.
Source management: The new Source menu provides feedback about the type of source driving your character. The Source menu is available in the Character Controls window at all times, regardless of the HumanIK tool that is active.
Custom Rig Mapping: The new Custom Rig tool gives you a visual interface for mapping your non-HumanIK rigs. Designed to streamline the mapping and retargeting process, this tool lets you map and retarget bipedal HumanIK character animation to and from a Custom rigged character. You can define your rig using the familiar click-and-assign workflow. Other controls let you save and load mapping templates and adjust the offsets between a Custom rig and the character's skeleton joints.
Customizable character layout: You can now customize the character layout in the Character Controls to fit your character. The layouts for the Controls and Custom Rig tabs are available as user-editable XML files, located in the new CharacterControls directory. Editing these files lets you create custom layouts. For example, you can replace the background image or change the position, quantity, color, and size of cells.
Improved controls for roll bone behavior: Updated Roll properties for the HumanIK skeleton definition are now available in the Attribute Editor to give you improved control over roll bone behavior as you rotate character limbs. See Define roll bone behavior.
Stance pose on body parts: Now you can force a stance pose (Edit > Controls > Stance Pose) on selected body parts. This functionality is useful during pose-to-pose character animation when only specific body parts need to be reset to create a new pose.
Continuous rig align: When manipulating your character in Full Body or Body Part mode, the IK and FK effectors of your character's Control rig now appear synchronized. By default, the IK and FK solutions visually merge to show the final solving of the character's skeleton. This feature replaces the Align Rig After Time Change option.
Heat Map Skin Binding: In the Smooth Bind Options window, the Bind Method options now include a Heat Map method. This method uses a heat diffusion technique to distribute weights, and generally gives better default results than the existing Closest Hierarchy and Closest Distance binding methods. Heat Map binding sets initial weights based on each influence object inside the mesh acting as a heat source, emitting weight values onto the surrounding mesh. Higher (hotter) weight values occur closest to the object, and dissipate to lower (cooler) values as you move away from the object.
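The "hotter close, cooler far" idea can be illustrated with a toy sketch. This is not Maya's actual heat-diffusion solver; the linear falloff, radius, and function names are assumptions chosen for clarity:

```python
import math

def falloff_weights(point, influences, radius=2.0):
    """Toy distance-based weighting for a single mesh point.

    Each influence emits a weight that decays linearly with distance,
    and the weights are normalized to sum to 1 -- a simplified
    stand-in for the heat-diffusion binding described above.
    """
    raw = []
    for inf in influences:
        d = math.dist(point, inf)
        raw.append(max(0.0, 1.0 - d / radius))  # hot near, cold far
    total = sum(raw)
    if total == 0.0:
        # Point is outside every influence's radius: fall back to even weights
        return [1.0 / len(raw)] * len(raw)
    return [w / total for w in raw]

w = falloff_weights((0.0, 0.0, 0.0),
                    [(0.5, 0.0, 0.0), (1.5, 0.0, 0.0)])
print(w)  # → [0.75, 0.25]
```

The nearer influence gets three times the weight of the farther one; the real Heat Map method solves a diffusion equation over the mesh, so it respects the surface rather than straight-line distance.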
Paint weights for nonlinear deformers: You can now paint weights for the Bend, Flare, Sine, Squash, Twist, and Wave deformers. Select the new Edit Deformer > Paint Nonlinear Weights Tool menu item to use a Maya Artisan brush and paint point weights on your deformed geometry.
Move Weights improvements: When moving weights (using the Move Weights button in the Paint Skin Weights Tool or Skin > Edit Smooth Skin > Move Weights To Influences), the first selected influence now acts as the source influence and all other selected influences act as targets. If an influence is locked in the Paint Skin Weights Tool, it will not receive weights when you move weights from neighboring influences.
Extrude tool improvements: The following improvements have been made to the extrude tool: Allows for more precision with the Thickness, Offset and Divisions values; Uses the same precision settings as the Channel Box. Select Edit > Settings > Change Precision to set; Sliders have been removed so you are no longer limited to a maximum or minimum value; Background color added for improved readability; Respects the use of Ctrl and Shift to adjust the speed of changing the values.
Sculpt Geometry Pinch Tool improvements: The new Brush strength slider lets you achieve more pronounced pinching while sculpting your NURBS and polygon surfaces. The Pinch brush algorithm has been improved to provide smoother results.
nHair: The nHair hair generation system has been added to the Nucleus dynamic simulation framework. As part of a Nucleus system, dynamic nHair curves can self-collide and interact with other Nucleus objects, including nParticle, nCloth, and passive collision objects. nHair has many advantages over the previous hair system including: Performance improvements especially for hair systems with a large number of follicles; Nucleus-based solving for collisions and self-collisions that provide better collision accuracy and control; nConstraints that let you create constraints between Nucleus object components; nCaching for saving and playing back hair simulations.
MayaBullet physics simulation: Maya now includes the MayaBullet physics simulation plug-in. Built from the Bullet physics library, the plug-in lets you use the Bullet physics engine to create large-scale, highly realistic dynamic and kinematic simulations. MayaBullet simulations can include interacting soft body and rigid body objects, as well as constrained collision objects, all contained in a single dynamic system within Maya.
Fluid nCaching improvements: The Create Fluid Cache Options window now includes a One file per geometry option, which lets you select multiple fluid objects in your scene and create individual fluid nCache files for each object.
nParticle: A new Post Cache Ramp Evaluation attribute lets you determine how ramp attribute data is evaluated. When on, the ramp output is re-evaluated using the cached input attribute rather than the cached data. This attribute is off by default.
Particle count heads-up display: A new Particle Count heads-up display option lets you display the total number of particles and the number of selected particles (including nParticles and classic particles).
New Viewport 2.0 features: Viewport 2.0 now supports animation and rigging features such as HumanIK, joints, motion paths, ghosting and playblast. Image plane support is also included as well as a new depth peeling transparency algorithm. In addition, support for several other shaders and tools, and polygons, NURBS and dynamics features have been added. Furthermore, Viewport 2.0 now includes widespread improvements in tumble performance of large scenes and in animation performance with large or complicated scenes.
New render passes added: Two new multi-render passes have been added: UV pass and world position pass. A UV pass converts UV values to R/G values and creates a rasterized version of UV space. Using a UV pass, you can replace textures in 3D renderings as a post-process without the need to track new textures in place. A world position pass converts position (x,y,z) values to R,G,B values. Use the world position pass for relighting workflows in compositing.
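The value mappings behind these two passes are simple enough to sketch in Python. The function names and the world-space normalization range are illustrative assumptions, not part of Maya's pass implementation:

```python
def uv_to_rg(u, v):
    """UV pass: UV coordinates become red/green; blue is unused."""
    return (u, v, 0.0)

def position_to_rgb(x, y, z, lo=-10.0, hi=10.0):
    """World position pass: remap (x, y, z) into a 0..1 RGB range.

    The lo/hi bounds are illustrative; a real pass typically stores
    positions in a float image without clamping.
    """
    def remap(c):
        t = (c - lo) / (hi - lo)
        return min(1.0, max(0.0, t))
    return (remap(x), remap(y), remap(z))

print(uv_to_rg(0.25, 0.75))          # → (0.25, 0.75, 0.0)
print(position_to_rgb(0.0, 0.0, 0.0))  # → (0.5, 0.5, 0.5)
```

Because every pixel stores its UV (or position) as color, a compositor can look up the original surface coordinates per pixel, which is what makes post-process retexturing and relighting possible.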
Mandelbrot 2D and 3D texture: The new Mandelbrot node allows you to texture your model with the Mandelbrot set. You can create a 2D version of this node (Mandelbrot), a 3D version of this node (Mandelbrot 3D), or shade a fluidShape node using the built-in Mandelbrot texture. The Mandelbrot set is a set of mathematical points in the complex plane, the boundary of which is an interesting fractal shape. Through this node, you can select the Mandelbrot set, the Julia set, the Mandelbox set and other hybrid evaluations. Using this node, you can add interesting effects to your Mandelbrot set fractal, such as circles, leaves, points, checker patterns, and Pickover stalks. Choose among different shading methods and customize the range of the color values used to represent your Mandelbrot set points.
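The escape-time iteration at the heart of any Mandelbrot texture is compact. Here is a minimal sketch of the underlying math, not the Maya node itself:

```python
def mandelbrot_escape(c, max_iter=50):
    """Return the iteration at which z escapes |z| > 2, or max_iter.

    Iterates z = z*z + c from z = 0. Points that never escape are in
    the Mandelbrot set; the escape count is what a texture node maps
    to a color value.
    """
    z = 0j
    for i in range(max_iter):
        z = z * z + c
        if abs(z) > 2.0:
            return i
    return max_iter

# c = 0 never escapes (in the set); c = 2 escapes almost immediately
print(mandelbrot_escape(0j), mandelbrot_escape(2 + 0j))
```

A Julia set variant uses the same loop but starts z at the sample point and holds c fixed, which is why one node can offer both evaluations.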
New substance textures and functionality: You can now automatically bake a substance texture to disk to render it with mental ray for Maya, IPR, or other 3rd party renderers. The following new substance textures have also been added: Clouds_2_Animated, Impact_01, Make_It_Tile, metal_plate_009, Plasma_Animated, Space_Ship, Sunshine, Water_Drips, Waves, Windscreen_Glass_01. The new Make_It_Tile substance texture allows you to easily and seamlessly tile a file texture.
New callbacks command added: The new callbacks command allows you to extend the Maya UI with your own components. Use this command to add your own callbacks to standard Maya hooks without the need to overwrite Maya MEL files. Currently, Maya hooks are provided for the Hypershade, the Create Render Node dialog, and Attribute Editor templates.
Free image planes: You can now create a free image plane. A free image plane is an image plane that is not attached to the camera; one that you can select and transform in your scene. Select Create > Free Image Plane to create one.
mental ray rendering support for GPU cached Alembic files: mental ray supports the rendering of GPU cached Alembic files, including baked diffuse color information if the GPU cache was used to create the Alembic file.
New mental ray BSDF shaders: Built in BSDF (bidirectional scattering distribution function) shaders from NVIDIA mental images are now exposed in Maya. You can find them by selecting Window > Rendering Editors > Hypershade > mental ray > Materials. For more information about these shaders, please see the mental ray shader documentation.
mental ray version 3.10: Maya now uses mental ray version 3.10.
Module support for plug-ins.
Send to 3ds Max: You can send various forms of data, including geometry, animation, materials, and textures, to 3ds Max. You must have matching versions of Maya 2013, 3ds Max 2013 and FBX 2013 to use this command.
Live Update Service: Check for updates including Service Packs and Hotfixes using the new Autodesk Maya Update Manager.
Improved search in the Maya Help: The Maya Help now includes an improved search that queries a wider variety of sources with greater efficiency than ever before. Matches from the Maya Help documentation and other websites, like the Autodesk YouTube channels and forums, are included in your search results. Each match includes an excerpt of text, the name of its source, and the date it was last updated, so you can quickly navigate your results. Note: If you search with the locally installed Help, you do not get results from online sources.
Updated Navigation Buttons: Clicking the new Share button lets you send a link to the currently viewed topic. This button launches your default email application and places the link in the body of a new email message.
You can now select the QuickTime movie (.mov) file format as a Render output for your Composite projects.
Autodesk MatchMover now includes Python scripting support. Use MatchMover's Script Editor and Script Manager to create and load scripts that process input and output data as well as launch interface commands.
94 Comments
Aaron F. Ross
Posted 27 March 2012 11:35 pm
Adnan ER
Posted 27 March 2012 11:47 pm
Pete Shand
Posted 27 March 2012 11:53 pm
Danyl
Posted 28 March 2012 2:15 am
But it doesn't bring many new functionalities; they need to justify buying a new version (1000 dollars Oo) by integrating features slowly and pushing innovation away. That's easy for them, they have killed Alias Research and Softimage... (they have killed Maya btw).
They integrate widgets and one or two interesting features (Trax blending and nHair). Maybe iRay for Maya 2014 or 2015? But seriously, I am thinking of switching 3D software, because Blender will integrate BMesh soon and I see the same thing everywhere: "Blender's getting better faster than Maya."
For example, the Hypershade in Maya is extremely slow; in Blender, shader nodes and compositing will be managed by the GPU via OpenCL.
Many users do not yet know the possibilities of compositing, UV mapping and basic sculpting in Blender with multires, which is why they are still on Maya. And they do not know all the power of the BGE:
Autodesk does not spend money on research but on business; they are dependent on the studios who innovate and make plugins, which they buy to add to Maya.
An example of real great updates:
Brush and create hair and Fur :
DynaMesh :
MicroMesh, NoiseMaker, BPR filter...
PS: this is a minor version of ZBrush; imagine ZBrush 5. And, the most awesome thing is that it is FREE.
Fast, free GPU rendering and real time interaction support OpenCL/CUDA/Linux :
With soon maybe modeling in the render.
traden1976
Posted 28 March 2012 5:52 am
This is why I've switched to modo. modo 601 is such a superior modeler and now has all the rigging tools anyone needs.
I used Blender for years before going to Maya (I switched because it's the app everyone used). I was amazed by how far the rigging system in Maya was behind Blender's. One of the only features in Maya that was better than Blender was a good native renderer.
If you want a great app at a decent price, use modo (full license for the price of a Maya update). If you want a great app for free, use Blender.
Note: I would like to congratulate the Sketchbook team on a great product update.
Cory Mogk
Posted 28 March 2012 6:10 am
As iRay and Maya stood in 2012, we could not integrate iRay without it looking like a new renderer. If we couldn't make it a natural part of mental ray for Maya, we didn't see a big benefit in doing it. With the work this year to make mental ray for Maya into a standard plug-in, we're in a better place to do that. Whether we do that or not is an open question. We know some people would like that and we know some other people would prefer our resources spent elsewhere. These aren't always easy decisions to make and we know that they can be disappointing to some of you.
phoppes
Posted 28 March 2012 6:11 am
Cory Mogk
Posted 28 March 2012 6:15 am
The Maya team has actually grown this year. We have a large group of people working on Open Data which is an investment in the future of Maya. It's a big initiative and we wouldn't be doing it if there was no future.
We've also spent a bunch of time fixing bugs, as many people have asked for, and I will post more info on that soon.
Alias Research was many teams. SketchBook being one that has done a lot of cool things. The Research in Alias Research has grown a tremendous amount under Autodesk. You can see a whole bunch of stuff the Research team is working on at AutodeskResearch.com/projects.
Cory Mogk
Posted 28 March 2012 6:20 am
Yes, of course, XGen is running at Disney Feature Animation. There's a little more to it than just dropping something like that into Maya. Things like making it build on platforms other than linux and making it work outside of the Disney pipeline take time.
Cory Mogk
Posted 28 March 2012 6:21 am
We shared what we could at the time - we'll see about how we can improve things in the future
Cory Mogk
Posted 28 March 2012 6:23 am
Thanks very much for this - speaking for the whole team, we really appreciate it.
As for other feedback, we're happy to have that as well. Despite what some may think, we do care about Maya and our customers' experience with it. Guidance on making things better for you always helps us make decisions for the future.
toha
Posted 28 March 2012 6:37 am
Cory Mogk
Posted 28 March 2012 6:45 am
This is supported in 3.10 so you can use it in Maya. I'll be upfront with you and say the workflow is a bit rougher than we would like.
Cory Mogk
Posted 28 March 2012 6:45 am
What kind of updates are you looking for?
Cory Mogk
Posted 28 March 2012 6:46 am
I'll post some more details on that soon
toha
Posted 28 March 2012 6:55 am
1. Support for custom meshes for twigs, branches and flowers
2. Ability to tweak secondary/tertiary branch attributes
3. Rendering without converting to polygons in mr
That's the first that comes to mind.
fghajhe
Posted 28 March 2012 7:16 am
Is there support for implicit shapes/ deformer sculpts, and instanced particles in viewport 2.0 now?
Thanks,
vinc_B
Posted 28 March 2012 7:18 am
Maya definitely loses its "3D generalist" software description today. Just be honest about that. The idea is to specialize the tools... so you can sell more tools. Right?
That's disappointing.
toha
Posted 28 March 2012 7:27 am
4. Roots
5. Trunk (with multiple trunks option)
6. Ability to draw the trunk by hand (like SpeedTree)
nbreslow
Posted 28 March 2012 8:14 am
I was hoping for a few more modeling updates. I saw one Artisan improvement listed but this is an area that can definitely use some work (Move Brush?). Also, an 'Insert Smooth Edge Loop' tool is overdue. And finally, I had hoped that the HUD overlays that were introduced in the 2012 Extrude Tool would make their way into other tools.
I second the sentiment that audio-less preview videos are frustrating. I know everyone keeps trotting out Modo 601 as the new poster-boy (it is kinda sweet, actually) but the guy who does their little feature playblasts does a nice job informing/entertaining. The Autodesk feature videos that do have audio are terrific, so thumbs up on those.
Thanks!
Cory Mogk
Posted 28 March 2012 9:52 am
Everyone who gets Maya 2013 will get the Alembic support - whether you are on subscription or not. It's important to note that this is the Alembic i/o as well as the GPU Cache node that does the super-fast drawing of Alembic data.
Cory Mogk
Posted 28 March 2012 9:55 am
The 1 click workflows should work where the products do, eg Maya and Mudbox exist on windows, mac and linux so you should have the workflow available.
Cory Mogk
Posted 28 March 2012 9:57 am
It will be soon - stay tuned.
Naqoyqatsi
Posted 28 March 2012 12:07 pm
dobert
Posted 28 March 2012 12:22 pm
Sorry I did not credit your HDRI; it is amazing as is. Let me take the time to thank you for adding so much value to the 3D community with hdrilabs. I am always telling users and friends how amazing it is.
Daryl
Cory Mogk
Posted 28 March 2012 12:33 pm
Braden99
Posted 28 March 2012 1:14 pm
The classic Maya viewport does.
You mention some dynamic features are now displayable in VP2, what are these features?
Quite a decent release, though not amazing; it would have been nice to have a small sprinkling of modeling features. Hopefully Maya 2014 is a bigger release. My wishes for the next release are: iRay, XGen, multi-threading, modeling, more node workflows, Viewport 2.0 becoming the default viewport by supporting almost everything (with the old one as backup).
jkbbbx
Posted 28 March 2012 3:11 pm
by adding "new" features that most of us won't probably even use will just gonna create more bugs on every release.
it's sad to see now its just another "high-end" animation package for big productions/companies.
Remydrh
Posted 28 March 2012 7:04 pm
Unified Sampling alone (an iRay inspired technology) improves rendering performance by nearly an order of magnitude over previous methods. The simplification of render setup (often bemoaned by non-techies) is worth the price of admission. All it takes is a simple UI change and nothing else. Modeling and animation tools, etc. can improve productivity, but not to the extent that a simple render setup with fast turnaround can, letting you iterate and refine much faster on a visual product. I can use Unified Progressive to preview frames at 2k in nearly 10-15 seconds a (sub)frame on a regular workstation. To say nothing of 3k in 20-30 minutes for full-frame raytraced renders at production quality. It's an amazing shame this is hidden in Maya and I'm hoping more people will expose and use it and finally ENJOY the process of making an image.
I'm hoping the new plug-in setup will see many more updates during the year unlike 2012 which languished behind 3ds Max and XSI in feature updates.
These updates alone would improve the likelihood a customer would upgrade rather than stay with the same piece of software year after year and opt for a cheaper purchase of a rendering license of something else.
T. I. Burbage
Posted 28 March 2012 10:58 pm
van_der_goes
Posted 29 March 2012 1:54 am
On the other side, nice work on the animation part.
rooftop
Posted 29 March 2012 2:47 am
"mental ray for Maya has been turned into a true plug-in" - does this mean Mental Ray won't unload itself when Maya crashes?
Will Composite ever be able to import video files?
mahdilal
Posted 29 March 2012 7:57 am
mahdilal
Posted 29 March 2012 8:02 am
Danyl
Posted 29 March 2012 8:29 am
In Mental Ray you don't have direct feedback on your lighting setup and shadows because the render is slow. Also, it's hard to get a good render and impossible to get something photorealistic; it stays biased.
Artifacts appear and many areas are darkened, and fixing this requires an advanced user who knows all the features of Mental Ray.
Here is an open room (Naboo) rendered with Mental Ray by an average user (me). There are still a lot of problems: artifacts and dark areas. The lighting is definitely not photorealistic even with FG, GI, Sun and Sky and other parameters. And fixing this is also very hard; you must add additional lights, control bounces, etc.
Now the same scene; I've kept the sun and sky lighting, but this time I render with iRay. 0 parameters, 0 skill needed:
Without surprise, the lighting is photorealistic: no dark areas, the light bounces infinitely, everything is bright. It stays noisy, but after 10 minutes of rendering at 1920*1080 this is acceptable (and it's JPG) on a GTX 570.
After a quick compositing of my iRay render:
Maya 2013 is not a bad version for people who want to do character animation with motion capture (retargeting, Trax blending to combine, motion trails to adjust, animation layers to create an overlayer). But features really expected, awaited for a long time, are still not there, and there is no improvement in modeling, PaintFX, etc. For example multi-res sculpting, Pixar subds, etc. The Hypershade is slow even on an SSD!
You will lose a lot of users if you continue. Yet I loved Maya until Alias Maya 8.5 PLE. :'(
But caution, Autodesk: ZBrush, Modo, Blender, Houdini and C4D are growing much faster.
For people who want iRay in Maya (buggy, no support for textures), here is the trick:
And also, can you make an overview of XGen?
Cory Mogk
Posted 29 March 2012 8:56 am
We know that iRay brings value to people's workflows. We also know that mrfM needs work. We don't want to hack iRay in at the expense of user experience. Our approach has been to fix the base so that we don't continue the problems of the past and possibly make them worse.
vinc_B
Posted 29 March 2012 9:06 am
Since Max appears to be your only real modeling/rendering tool, and Max/Maya interop is limited to Windows only, your suite strategy is not appropriate for a lot of your Maya customers (the non-Windows ones, and all the others who don't want to learn several big tools to work). You expect users to use several pieces of software to work. So, since I have to look for functionality outside of Maya to progress in my work, I won't look for that functionality in another Autodesk product.
Maya is no longer a tool for the 3D generalist.
Danyl
Posted 29 March 2012 9:07 am
Ok, you want to fix things, but when will this be fixed? Why can Pixologic integrate so many new features at the same time in ZBrush?
Is its integration scheduled for future versions of Maya?
Why have 3ds Max, Cinema4D and CATIA had iRay for two years already? iRay 2.0 is already available... it supports animation and is even faster, one more reason to adopt it.
Even if it is not stable or there are compatibility issues, listen to the needs of users.
Our needs are your priorities! Especially when everyone harasses you about one feature: iRay (PaintFX, modeling and XGen behind).
Remydrh
Posted 29 March 2012 9:10 am
As opposed to customers passively (or aggressively) hammering forums, what can be done constructively to resolve this?
"Our approach has been to fix the base so that we don't continue the problems of the past and possibly make them worse."
Ok, what steps can be done to complete this fix? What areas specifically need work or solutions? Are you getting enough testing or feedback for features? Are there concerns you perceive from existing features in mental ray? I see a lot of complaints about non-exposure of new features, some of which are designed to eliminate the problems (or replace) old features. Are these features causing integration problems?
Can you leverage your partners more? It's my understanding that resources aren't very high for rendering at Autodesk. There are quite a few of us that are very interested in helping you improve this area but for one reason or another are getting nowhere.
Cory Mogk
Posted 29 March 2012 2:20 pm
Turning mrfM into a true plug-in was a big chunk of work and it was late in the development cycle when it was completed. For the bit of time between when we completed that and locking Maya 2013 down, my understanding is that the teams, Autodesk and nVidia, were much more productive than previously. At this point in time it was fixing bugs but it seemed to go well (a higher than average rate of bug fixing) so we see that as a positive improvement.
Can we give the plug-in to nVidia? That's a fair question that a number of people have asked. I don't have a solid answer for you at the moment.
In terms of working with our customers, feedback is always welcome. Bug and enhancement submissions always help (find the link under Maya's Help menu). Beta testing is another option and you can sign up at beta.autodesk.com.
Remydrh
Posted 29 March 2012 2:39 pm
I noticed new drops were accepted into 2013 much later than usual, which we appreciate very very much. We're hoping this continues with the new plug-in structure! We're very excited at the possibilities.
I would love to give you my experience with beta.autodesk.com but will not do so on the public forum. . . . But would love to share it. You can find that I am signed up for the Beta already.
--David Hackett
LSchock
Posted 29 March 2012 6:42 pm
Is there a change in the animation playback frame-rate of a skinned character within the Maya viewport to be expected with the new version?
...especially when working with HIK in Maya compared to directly in Motionbuilder (e.g. currently I am having 20fps for 1 cut-up character vs. 100fps with 4 skinned characters in MB)
What's the plan with ATOM vs. .fbx vs. .anim - they all seem to be the same thing in different flavors, what's the difference and what will be the future. Do any Game Engines support the Atom format?
LSchock
Posted 29 March 2012 6:45 pm
Interacting with the HIK rig in Maya currently seems way slower than doing the same thing in MB - will this change in Maya 2013?
Will the HIK allow squash and stretch? In MB it can be faked with scaling the Forward Controllers, but in Maya it doesn't work
Danyl
Posted 30 March 2012 4:25 am
Thanks.
Cory Mogk
Posted 30 March 2012 11:59 am
We have isolated some performance issues with characters but the fixes are not in 2013.
Squash and Stretch is not in HIK.
Cory Mogk
Posted 30 March 2012 7:48 pm
This has been in for some time with the Smooth Mesh Preview mode (1/2/3 hotkeys on a poly mesh). I had to check and it appears to be Maya 7 ()
T. I. Burbage
Posted 31 March 2012 1:01 am
Cory Mogk
Posted 31 March 2012 4:40 am
Danyl
Posted 31 March 2012 8:51 am
In each Maya release there are tons of bad reactions or harassment, because what we want to see has still not been available for a long time and we never get an answer, or maybe just an excuse ("we need to fix this first" - we still don't know what is planned). But a new version of Maya is not a service pack; we need more to improve our workflow.
And my review of Maya 2013:
Original review (in French):
toha
Posted 31 March 2012 9:47 am
And what benefit can a mortal user get from it?
Cory Mogk
Posted 31 March 2012 11:23 am
ATOM is meant to replace .anim; ATOM and .anim are focused on animation, while FBX supports lots of other data and interop between products. ATOM and .anim are more for work in Maya.
ATOM is XML based and uses the same description as the custom AE Templates and Asset Templates. One of the points of Open Data is that we make the solution open so that people can tweak the data and/or the way the data is created.
Cory Mogk
Posted 31 March 2012 12:04 pm
One of the first things is that you have access to the Alembic file format in Maya. Alembic is a very compact file format so you get a lot of data with a small memory footprint. You can use Alembic for a single object (like you would OBJ) or you can use it for animated objects like characters or particles (like with the nucleus cache).
The second part of Alembic is that you get the GPU Cache node. In the movie we show 500 cities with 21 billion triangles. If this was in the standard Maya representation it would be very slow, if not impossible (depending on your hardware). The data in the GPU Cache node is non-editable (you cannot move points on the mesh) but you can snap to it. If you are building something like a large, complex city, this is very powerful.
We are also calling the File Referencing improvements (access to file referencing in the Outliner) part of Open Data. If you are working with a large dataset, this makes it easier to break it into chunks and selectively load the most important pieces so that you keep a high framerate in Maya.
I'll post some more detailed movies of this soon.
Cory Mogk
Posted 31 March 2012 12:05 pm
I have about 15 pages of bugs that we've fixed in Maya 2013. This is a big request - fix more bugs - so we've done that.
We are looking into setting something up like the 3ds Max team has done with User Voice. We think it's been successful and would like to use that as a way to get more direct feedback from the community.
3Dmonkey
Posted 1 April 2012 1:20 am
Imagine having a public system where all those bugs you report are aggregated and exposed to the users.
People could see if a bug had already been logged and confirm the issue and share additional info (severity, example scenes etc.), instead of logging a separate bug for the same issue.
We used to have something similar at thnkr.com before Michiel pulled the plug out of his own frustrations. It was great to be able to discuss issues and confirm problems with others (it felt like it was making a difference anyway).
I'm guessing there is something like this if you're on the beta, but we all know the bugs don't stop when the product ships right? ;-)
Trust the users.
We all want a better Maya.
Danyl
Posted 1 April 2012 3:16 am
Anyway, if you continue to ignore users with each new version, remember that even though you bought the three largest 3D software packages (Max, Maya, Softimage), less expensive software exists, is beginning to integrate the essential functionality, and exceeds you in certain areas. Blender is better than Maya for sculpting, for UV mapping, for compositing (based on OpenCL), and for previewing with OpenCL/CUDA render compute (but Cycles is biased, and iRay is not a reason to integrate).
But if you create a User Voice, don't do what Unity did. There have been tons of votes for Unity on Linux since 2009, but none of the developers responds to the suggestion:
alshalan
Posted 1 April 2012 4:28 am
I am happy with that.
Thanks, Mogk.
Cory Mogk
Posted 1 April 2012 6:17 am
I'm glad to hear there's support for this. Let's get it up and running like the 3ds Max team and then explore what else we can do.
In terms of a public bug board, it's not that easy to do as a lot of the information customers send in is confidential.
As with CERs (Customer Error Reports), the more times we hear of a bug, the higher its priority gets. Never assume we know about an issue. Even if we do know about an issue, additional reports can help us reproduce it and/or make sure we fix it from the proper angles.
If you want to get the status on an issue, the Support team can update you.
Nimadv
Posted 1 April 2012 10:11 am
illincrux
Posted 1 April 2012 5:42 pm
Blender is just one tool away from me switching for good...
3Dmonkey
Posted 2 April 2012 2:08 am
I think if people could in some way see their submissions in the context of a user community, where others can post workarounds or fixes, see the status of known bugs and answer requests for additional information about a given bug, then you empower users and more effectively leverage the vast user-base, who all have an interest in improving the software.
I think Maya users generally love using it, but are frustrated with the bugs and unrealised potential. How can we open it up to the community to help make Maya even better?
Danyl
Posted 2 April 2012 4:47 am
How long has it been since modeling or Paint Effects were improved?
Some say the nHair in Maya 2013 was ready several years ago but that Autodesk deliberately delayed its release, as with other features. A lot of people think you hold back and delay upgrades so you can do the minimum.
Especially since Maya is extremely expensive: an update of the software costs the price of Modo, a powerful computer, or a small renderfarm.
ILM and ZOIC ( ) are beginning to integrate Modo. Small studios are adopting Modo, Blender, C4D, or Houdini.
The source code of Maya is crap, full of bugs and patches that create other bugs, unsuitable for some projects. Blizzard even considered switching from Mental Ray to RenderMan for the sake of stability on the StarCraft II cinematics.
Clearly Maya is not the future of 3D (if you continue like this): must we switch software? Or will you start making Maya the great program it once was? Remember, Autodesk: Alias Maya was the number one 3D program, with the most advanced features, for more than 10 years!
For example, Pixologic released a bug-fix version, release 2b, yet still integrated tons of features like FiberMesh, which beats Maya Hair/nHair for brushing hairs and creating hairstyles.
So can you imagine the incoming ZBrush 5? Especially if the render engine of a sculpting program becomes more powerful than the engine of an animation program! I laugh in advance imagining what will be new in Maya 2013.5/2014. I even think it's useless to follow Maya development, because it will soon be far behind other programs. Can I say "RIP Maya"?
Shame on you, Autodesk: you're trying to kill innovation and progress just to dominate the market and dictate your prices. Users are nothing to you! You just see money everywhere!
How many awards have you won, Autodesk? Alias Research had won 11 awards for Maya before you bought it!
And Maya, at its price, is not complete: some users need to switch to 3ds Max for iRay rendering, Blender for UV mapping and compositing, Modo or ZBrush for retopology. And now ZBrush for hairstyling, posing, fast UV mapping, and decimation.
Since Maya 2008 I have heard: "we need to fix this first." Before what? Buying a new plugin or demo plugin, like DMM and Craft Animation, to bolt onto Maya's next version?! And why are there more bugs in each new version? Bad plugin integration...
So you have been fixing things since Maya 2008 (since you bought it). Now, five years later, can you deliver a great Maya 2014 vs. Modo 701 battle with all the warnings and suggestions you have received? Can you innovate for the first time?
I am just waiting one last time, to see what Maya 2014 will be. If there is nothing new for modeling, rendering, etc., I will change programs, as many will (some already have).
It will certainly be much easier for our workflow to work with other 3D programs than Maya in the future... because for now we must use a different program for each task. I use an animation program, but I must switch to another to do physically correct, fast rendering with my GPU...
I don't want to insult the Autodesk team, but I would really like to shake you, because people paid to kill innovation aren't human beings. Especially in an area that needs to evolve a lot, with the next generation of consoles and future movies coming.
Emmanuel31
Posted 2 April 2012 6:14 am
If you don't have time to do true innovation, you have to think: OK, this year we rewrite the source code of Maya, and next year we build innovation (modeling, rendering, dynamics) with real integration, not plugins. That's OK for me; I'll stay. I can work for two years with Maya 2011 or 2012 without an update. But be honest!!!! You are at the limit; the money makes you blind.
Silo also has very great modeling tools, and it costs 100 dollars!!!! You have to see what's happening around you.
Cory Mogk
Posted 3 April 2012 4:14 am
I wonder where you heard such a thing - it is definitely not true. Why would we hold something back if it was ready to be used?
Swakaj
Posted 3 April 2012 5:59 am
pitomator
Posted 3 April 2012 9:54 am
What's with modeling? Are we all meant to get 3ds Max for that?
I mean no offence to the Autodesk team, but I sure would like some modeling tools (and iRay).
el.mustafa
Posted 3 April 2012 12:48 pm
Looks like 2015 or 2016 will be the upgrade to get..
Anyone else noticed the SketchBook video above??
That just blew me away !!!
Wow
mdfisher272
Posted 3 April 2012 1:48 pm
Currently, I'm working 13 hrs/day, 7 days/week. Jumping back and forth between software consumes more time than you can imagine.
In my off time I'm learning Houdini. I'm tired of trying to deliver on a deadline with Maya's ancient particles that look so-so when rendered (IF they render). I just spent the last 10 minutes at work trying to find out why my coworker's particles weren't inheriting color, and he probably wasted 30 minutes before he spoke to me. Turns out, my friend had to restart. A BUG.
I still frequently get crashes when the scene has nCached particles and I press the "Go to start of playback range" button. This occurs whenever my playback range begins at the same frame as the cache. This has been a frequent problem since nParticles existed. The next time I get a scene with this problem, I'll send it to you. (I'm on 2012 sp1 BTW)
I've had to fix the "clear initial state" script myself. The script crashes if it tries to clear a PP attr that has an incoming connection. And don't get me started on "mel" programming. Heaven forbid if a line in your userSetup.mel file errors out. Everything after that line goes "bye-bye" on startup.
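That userSetup failure mode can be worked around by isolating each startup step so one error doesn't abort the rest. Here is a minimal sketch in plain Python (the same pattern applies to a Maya userSetup.py; the step names here are invented for illustration, not real Maya calls):

```python
# Sketch: run each startup step in isolation so a failing step does not
# kill everything after it (unlike a bare userSetup.mel, where the first
# error aborts the rest of the file).
import traceback

def load_shelves():          # hypothetical startup step
    print("shelves loaded")

def load_plugins():          # hypothetical startup step that fails
    raise RuntimeError("plugin not found")

def load_hotkeys():          # hypothetical startup step
    print("hotkeys loaded")

def run_startup(steps):
    """Run each step, logging failures instead of stopping."""
    failures = []
    for step in steps:
        try:
            step()
        except Exception:
            failures.append(step.__name__)
            traceback.print_exc()  # log the error, keep going
    return failures

if __name__ == "__main__":
    failed = run_startup([load_shelves, load_plugins, load_hotkeys])
    print("failed steps:", failed)
```

Here load_hotkeys still runs even though load_plugins raised, which is exactly what a bare userSetup.mel won't give you.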
nParticles is just a kludge to address issues that should be addressed by implementing a more modern dynamics system. Internal ramps? Are you kidding me? A node is making connections into itself and you can't see them in any node editor?
Maya should not be a "fixer-upper". FumeFX and Krakatoa are eating Maya's lunch. Try to simulate an inferno with Maya fluids and watch your life creep away -- one minute at a time.
I don't know who's to blame for the lack of any true dynamics innovation on the Maya front. I don't want to be forced to use more than 2 different apps during the course of my job.
royterr
Posted 4 April 2012 12:34 pm
fghajhe
Posted 4 April 2012 4:53 pm
Hejl
Posted 5 April 2012 1:48 am
This year I wanted to buy the new version of Maya for my company YOUNG & RUBICAM, but now
it looks like it is time to say GOODBYE to MAYA and switch to MODO, you IDIOTS.
Thanks for Maya 2009.
Paranoiker
Posted 5 April 2012 2:48 am
Danyl
Posted 5 April 2012 7:45 am
Autodesk, you have one last chance: the other software packages aren't as powerful as Maya in some areas, so keep your lead and focus on modeling, rendering, Paint Effects, and performance with more GPGPU features.
Because there is still iRay, which I hope to see; Blender Cycles is biased, and Modo's renderer is biased (though almost unbiased). You could add retopology, multi-res sculpting, an environment generator (XGen), video game editing (Skyline), and real-time fluids (Nvidia Maximus); you could also add native Kinect compatibility in Maya 2013.5 on Windows 8 for motion capture for everyone, and face capture (with HumanIK's retargeting this could be awesome), etc.
I fell in love with Maya and 3D when I saw the first Pixar movie, Toy Story, so it is hard to leave the software, but I am saddened to see how much the development speed of my favorite software has slowed since you bought it!
Can you tell us when the most awaited features, waited for so long, will be released? iRay. Each time I see iRay renders on YouTube on a GTX 680 with 1536 CUDA cores and then go back to Mental Ray, I cry...
Cory Mogk
Posted 5 April 2012 3:11 pm
It's true that there are more animation features in Maya 2013. I wouldn't count Maya down and out in other areas. I wish I could say more but right now you'll have to trust me. I'll try to cover some things in upcoming posts to show off some more of the other new tools.
royterr
Posted 6 April 2012 1:34 pm
I would definitely (without any doubt) count Maya out when it comes to rendering/modeling. To be honest, I have been hearing "I couldn't say more" from Autodesk developers since the Maya 2010 pre-release:
Really, what does it take to fix the MR integration (make it normal, like in Max or Softimage) and fix old bugs that have been hanging around for ages? 3D World and 3D Artist magazines and the CGSociety and 3DTotal sites have also been giving Maya bad modeling/rendering reviews since 2010 (simple example:), and yes, Viewport 2.0 and other commercial features are not as important as fixing Maya.
- The whole integration is broken: when MR options override Maya's in certain shaders/situations, the Maya options don't get grayed out, and that happens everywhere.
- The light nodes are confusing in mr4maya: you start with a Maya light, then add an MR shader and start tweaking options everywhere, plugging and unplugging these different MR light shaders for different Maya lights, especially for Maya/MR area lights...
- Setup time (shaders/lights/render) takes much longer than in Max/XSI, and producers/art directors don't like this at all.
- The color management system is still broken (color swatches don't get gamma corrected, for no reason at all); in Max it's just one click, and that was done 3 years ago.
- 2D motion blur doesn't work with particles (crash), and 3D motion blur doesn't work with moving fluids.
- Certain shaders available in Max/XSI just aren't in Maya, like the essential glare shader.
- mia_materials can become unstable and show weird shadows and FG artifacts when you duplicate them in the Hypershade.
- A lot of workflow bugs: you can't use this with that or that with this. Just try to use scanline with fur, displacement, and motion blur in a medium scene (mr4maya becomes unstable), or just try to use a mia_material... I could go on for a while...
- Baking and displacement generation don't support multithreading (it's 2012!).
I have another 20 or 30 bugs in mind right now (out of the hundreds that exist), and I could go on for a real long time like this...
With each release I keep telling myself that the next one will be "THE" release that brings Maya up alongside Max in the race. As a 3D artist who works a lot on modeling/rendering, I have been repeatedly disappointed since Maya 8.5 (the last time I went WOW!).
Why can't there be a program that's good for arch modeling, has a good tree system, and has a good rendering engine?
Max is an atomic bomb when it comes to modeling/rendering, especially the Max/V-Ray combo with all the plugins like VRayPattern, VRayScatter, Autograss, ForestPro, and Ghosttown, and we don't even have one similar plugin for Maya.
One thing is for sure: Maya is starting to look like a caveman in front of the modern Max and its "Excalibur" project. I don't understand why Autodesk is doing this; they have no interest at all in doing so. The only logical explanation is that they are orienting Maya toward animation and don't want it to be powerful in modeling and rendering tools (BTW, modeling tools in Maya are a complete joke compared to Max or Modo).
The reason I say this is that it's becoming more and more obvious. They could make Maya the number one tree generator just by incorporating accurate PFX tree presets (just like prodan's trees), but I guess Viewport 2.0 is more important: it's just a viewer, for god's sake; we should have that by default.
Thysanura
Posted 8 April 2012 7:37 am
Quoting "User Guide + Basics + Interface overview + Marking menus": "Marking menus are very fast for experienced users because once you get used to showing them and the positions of their items, you can select the items using very quick gestures with the mouse or tablet pen, sometimes so fast the entire menu won’t even display."
Then you have the same selection-type sensitive marking menu "Shift+RMB" (with no selection) to create polygon primitives at nearly the speed of thought. All these other 3D applications have "press big button on this window over here that's hogging up screen space" compared to Maya's "gesture through polygons marking menu to create any primitive in less than a half a second" without 12 hotkeys, just one "Shift+RMB", and you can have the entire UI hidden (Ctrl+Space) in Maya and just use the Hotbox and default marking menus for polygon modeling and normals editing.
I used to use 3ds Max and having to assign dozens of hotkeys for Edit Poly was a hassle, the quad menu wasn't bad though. In Maya you get a default set of marking menus (Shift+RMB and Ctrl+RMB) that are just awesome for polygons, with no need for customizing. In 3ds Max (in 2009 at least), you had to move the mouse over to the Command Panel and "Ctrl+Click" on a selection type (faces, edges, etc), to convert the selection. In Maya, you just press "Ctrl+RMB" anywhere in the viewport and then quickly convert the selection with the polygons marking menu rather than clicking on a huge panel taking up screen space.
There is a genuine need for more modern polygon tools, but I often see users who appear not to know what the software offers in terms of marking menus and the productivity they bring (judging by the thousands of "modeling" videos on the internet for Maya in which marking menus are completely ignored). They then complain that "modeling is slow" (which it is if you're not using marking menus) and want Autodesk to change the software to the point where we'll get some huge "ribbon" thing at the top of the UI hogging screen space, when all we really need are modern polygon tools in that Edit Mesh menu, improvements to the old ones, and those edits incorporated into that fantastic polygons marking menu. Local-based symmetry would be great too; a lot of people have voiced their needs for modeling improvements, so I won't repeat much. To Autodesk: whatever modeling improvements you make in future versions, please don't forget about the marking menus; don't "deprecate" that workflow, because it's a significant productivity feature that Maya has and the competition doesn't, and oddly enough it makes Maya's relatively old polygon toolset a pleasure to model with (with polygons at least) despite other software (such as Modo) having much more modern poly tools.
The marking menus in Mudbox are kind of slow and not as responsive compared to Maya's, but they do allow you to fly through them (as you can in Maya), so they're good. In Maya for example, the vertex skinning marking menu added in 2011 is very well done. You can hide the entire interface (Ctrl+Space), and use the right click marking menu on joints in the Paint Skin Weights Tool, then use the "U+LMB" key marking menu to change between Replace, Scale, Add, Smooth and the N key to control the intensity of your value, then B hotkey for brush size; Alt+F to flood, Alt+A for wireframe toggling; so there's only 5 hotkeys to remember to be able to skin extremely effectively and fast, all are default, and even better; the Artisan hotkeys are consistent across other tools such as Sculpt Geometry Tool, and well, many others. It's another reason I really like Maya; consistent, organized interface and same hotkeys for similar tools (Artisan). I think the Nucleus stuff is looking really good by the way.
I know Maya has its share of problems; personally I'd like to see significant Mental Ray updates, such as a blend material (using mib_color_mix is okay, but not as feature-rich as a true blend material), Unified Sampling and the MIP shaders officially supported, iRay, and things along those lines, along with better render-pass support. It's annoying to have to construct a funky network just to get sss_fast_skin_maya's 3 different subsurface effects out; that one should be converted to _x_passes. I do like how nodal Mental Ray in Maya is, though; mia_envblur is a node rather than a checkbox option like it is in 3ds Max. For example, attach a mib_blackbody to the Whitepoint of a mia_exposure_photographic and you have color balance in your scene. It's the little things that make the actual design intent of Mental Ray in Maya quite good, but there are problems in the implementation, like photon intensity not matching light intensity and, of course, many bugs, including odd workflows for certain light shaders that should be automated, and unfinished AE templates for many Mental Ray nodes (Write Operations not being displayed as worded options, etc.). Proper gamma-encoded color swatches would be nice too, along with more of the modern features that Mental Ray offers. That Autodesk has converted Mental Ray to a true plugin gives me hope that maybe it will be better done in future versions of the software.
On the plus side, 2013 seems like a relatively solid release in other areas from reading this announcement. I'll be looking forward to reading the "What's New" section in the Help documentation while downloading the free trial and giving it a test run.
seifneo
Posted 8 April 2012 8:35 am
Maya 2013 Docs
before downloading the free trial :-)
Thysanura
Posted 8 April 2012 8:57 am
mjmurdoc
Posted 8 April 2012 10:11 pm
I really wish poly modeling got more attention... I pretty much use all marking menus to model but I've been drooling over modo's features for a while now.
Maya needs:
- real re-topo tools ( like snap to mesh, I think max has this... just go cheat off their homework, I won't tell Teacher.)
- real symmetry tools (mirror merge never works, and how about symmetrical modeling?)
- mesh cleanup / decimation is still in the stone age (it's currently my favorite way to crash maya)
Summary: When most of the modelers I know do their modeling in non-autodesk products (modo, silo, 3dcoat, zbrush) there's a problem.
On a nicer note, really looking forward to the node editor, although I don't understand why the Maya team didn't just redesign Hypershade. All the functionality is already there. (minus the know-what-the-hell-is-going-on simplicity of the Node Editor) Do we really need a new editor for something that is already there? Still, I like it. It's pretty.
Questions:
Will Mental Ray materials show up in the node editor?
Will they actually look good? (They look terrible in Hypershade, requiring lots of extra "test" renders to actually get the material right.)
C4D's material system actually shows you what you're going to get when you render... just go cheat off their homework, I won't tell Teacher.
Other than that, good work with the animation/rigging improvements: Lots of great stuff here! This is where Maya really shines!
(Other than the mind-blowing awesomeness of marking menus! Seriously, marking menus with my wacom tablet are probably the one thing keeping me in Maya.)
RROCHS
Posted 10 April 2012 4:57 am
Thysanura
Posted 10 April 2012 5:49 am
Yeah I agree, this is a significant update. In my opinion, Autodesk has done a really good job on the Qt interface, among other things. In Maya 2011, for example, the Hypershade was slow to load, but in Maya 2012 SP2 the speed of opening the Hypershade roughly doubled compared to 2011. I never measured it precisely, but the speed of drawing the "heavier" windows (Visor, Hypershade, etc.) in Maya 2012 SP2 is perhaps almost 70% of what it was in Maya 2009 (subjectively speaking, on a blank scene), and fully drawing windows is a rare operation; you don't constantly open the "heavy" windows, after all, so the detriment to productivity is negligible.
I suspect that Autodesk will keep doing improvements and though there's still some UI quirks to iron out, I think Maya is looking very good and if the developers can perhaps improve modeling and rendering in future releases, Maya will once again be a really solid generalist tool; it still is, just has some older tools that need updating, and that funky Mental Ray could use a fresh integration. Maybe the developers could collaborate with some studios, such as "Oktober Animation"; what they've done with their "MentalCore" plugin (an implementation of Mental Ray for Maya) is nothing short of astounding, and to really get something like a solid rendering implementation, real-world testing is a good way to go. I'm just tossing out ideas, I know Autodesk can't officially comment on potential future improvements and things of that sort.
Paint Effects alone is one of the most powerful artistic tools I've ever used (though the learning curve is a bit high, which is understandable considering how much control you have). Adding to what someone else here said, if maybe 100 or so realistic plant and tree presets were incorporated into the Visor, with little to no changes to the core of Paint Effects, that would be quite significant, though updates to the core would of course be great too. I've created some plants and trees with Paint Effects from scratch, and it's very powerful, but unless you have a lot of free time to learn the majority of the nearly 300 settings for Paint Effects, you're not going to get really good results fast. So maybe some updates to Paint Effects (with modern presets in the Visor at least) would be in order eventually. Just an idea (not really mine actually) among many others in the comments section here.
I also want Viewport 2.0 to continue being updated in future releases, to the point where one day we can all just default to Viewport 2.0 and know that everything the old viewport supported is supported in Viewport 2.0 as well. The updates to Viewport 2.0 in Maya 2013 are very good considering all the other work that was done in other areas of the software.
There's nice things in 2011 and beyond like when you go to a menu item with an option you can just press "Shift" to open the options (rather than having to click the options box), and this works in the Hotbox and the menu bar (not marking menus though, which is fine since it would really mess with the way they behave), as opposed to just working in the menu bar and requiring I think "Shift+Alt" like it did in Maya 2010 and previous releases. That's just one improvement, but there are others, such as the floating color chooser allowing you to keep the window open (say on a "transparency" attribute) but then choose a different material and the related channel ("transparency") gets automatically loaded into the floating color chooser, I think that's neat.
These are little things really, but just wanted to say that overall I'm relatively satisfied with Maya's development and I hope the team continues to keep the interface and user experience consistent, regardless of how much Maya changes and improves in future releases. The foundation (interface) matters a lot, and I think they've done a really good job updating it.
Cory Mogk
Posted 10 April 2012 7:03 am
We watch the stability of Maya closely. Through data like CER and CIP we know that Maya is doing a good job. If you're seeing issues, please let us know by contributing CER and CIP data, by logging bugs (through Maya's help menu) and by getting in touch with the Support team (check out their Maya Station blog: mayastation.typepad.com/). Some problems are not easy to reproduce so the more info we get the better. If we get duplicate reports it helps us to prioritize them.
This past fall we received some detailed info from customers that helped us isolate some problems and we released SP2. It takes extra effort to do an SP and takes away time from developing other features but we felt it was the right thing to do for our customers.
We do have some dependencies when it comes to fixing bugs: operating systems, hardware (particularly graphics cards) and some components of Maya like Qt. This means some issues can take longer to fix than others.
If you're having an issue, it's always good to check the hardware certifications list (see System Requirements on Autodesk.com/Maya). There are a lot of system combos and we try to test the most common ones.
el.mustafa
Posted 10 April 2012 10:14 am
As you said, only in 2012 did the UI begin to load in decent time.
I'm also grateful for the effort put into fixing this.
Just not sure why our expectations of Maya have become so low.
We're excited about a physics engine you could get for free anyway with Blender,
two extra file formats that distract from our workflow, and a slider for the extrude tool.
It's a pity.
fghajhe
Posted 11 April 2012 5:38 pm
Emmanuel31
Posted 12 April 2012 6:56 am
With errors on reflections and alpha maps; in 2013, yes, all is good, no errors, and the rendering time for the same scene is 16 minutes!!!!
Good job by the developers.
I work on a 12-core Mac Pro with a Radeon, and all seems good with Maya 2013. I hope now that I won't have to pay for bug fixes, but will pay for innovation!!!! That's the normal way, no????? For that, I hope in a few months we'll have an SP1 or SP2 with iRay (working on Mac :) and new modeling tools with a real symmetry tool and new retopology.
And again, many thanks to the developers.
Danyl
Posted 15 April 2012 3:39 am
Currently Blender has multi-res sculpting (Maya doesn't) and retopology tools (Maya has nothing for that).
Blender 2.63 :
Blender package :
And for those who are waiting for iRay, GPU rendering in Blender with GPU compositing (based on OpenCL) :
And for FREE !
I have made a table of the modeling shortcuts of various 3D programs:
I hope to see iRay in a 2013.5 version of Maya, like 3ds Max 2011.5 (a mid-cycle build), along with powerful modeling, sculpting, retopology, and painting tools; advanced Paint Effects renderable with all engines; a GPU-based node editor/compositing; GPGPU simulation for Nucleus; good previsualization of shaders; a huge shader base and presets compatible with iRay; interactive rendering in the viewport; post-production effects (image adjustment, crop, filters); powerful compositing tools inside Maya; and a more user-friendly Maya Hair, like FiberMesh, with presets (grass, etc.) and hairstyling tools.
Cory, have you seen UV Master? It lets you do a perfect UV mapping, with controlled seams, of an entire character in less than 10 seconds:
illincrux
Posted 15 April 2012 12:15 pm
I'll still use maya for rigging and animations, but the modeling tools are just not going anywhere fast enough...
Danyl
Posted 17 April 2012 4:48 am
Mental Ray needs many tweaks to get good lighting with no artifacts, and it's way too slow compared to Cycles.
---
Blender and Modo are growing much faster than Maya. We must not forget the incoming ZBrush 5 this year, which will probably integrate the most powerful and innovative modeling tools in the world (I am serious).
ZBrush hard surface :
For ZBrush 5 they will maybe have tools for architectural modeling and car modeling (which are missing in ZBrush), a procedural texture generator that creates real relief, etc. The render engine is becoming fast and realistic. HDRI-based lighting with LightCap is more efficient than in other software because it quickly generates light sources from an image (less resource consumption, though maybe a little less realistic) and lets you edit individual lights afterward. UV mapping in a few seconds, a decimation tool, PaintStop for drawing (as efficient as Photoshop). Maybe they will improve the animation section, or integrate hair and cloth simulation. It is difficult to say what ZBrush 5 will have, because the software is so awesome and innovates so much (it creates buzz with each new release, even minor ones).
With Maya, I am never in a hurry to try a new version, because the small updates look like service packs. The software creates buzz too... but for its poor new features and the amount of bad reactions you can find on the web!
rooftop
Posted 17 April 2012 7:17 am
Hmmm, Blender indeed. For the little while I've been using Blender, I haven't really missed Maya's marking menu (as much as I do love it).
After looking at all the positives, Maya is still lacking in modelling and UV tools. For instance, if I want to add more than 10 edge loops in a given situation, it's not quick and easy. And selecting every Nth edge is still just a Bonus Tool? Can these tools not be updated and integrated out of the box?
xixac
Posted 18 April 2012 12:07 pm
Totally agree - effectively, Bonus Tools and LT are available for about three months a year: from winter, when they get updated, until spring, with the new release of Maya. The problem is that some tools, like the UV tools, are quite indispensable for making work in Maya bearable. However, the engineers working on Maya apparently have nothing to do with these, and they are released as a kind of favor by third parties. SO - you have the option of not integrating them into your workflow, or only updating nine months after major version releases.
mckarp
Posted 3 July 2012 2:21 am
gotanidea
Posted 23 September 2012 8:14 am
erpy
Posted 2 July 2013 12:09 am
I don't believe the "original Maya core" is bugged; I do suppose Autodesk tried to integrate new stuff deeply into the Maya core and broke something here and there... that is actually possible.
Remember, Maya was the first ever commercial product that was entirely "node based", 15 years ago!
It is true that not exposing features that *others* (users) are exposing (an iRay interface) is a scandal. Call it an "experimental" feature, but do it for the sake of it!
Guys, the "choice" node is still buggy... it's been there for ages! The "multiple" connection bug on the "plus-minus-average" node is STILL THERE... oh PLEASE! Do you have another excuse for these?
Viewport 2.0 !? You don't integrate "Iray" because you don't want to do a shallow job on it, and then you release ever-crashing, deeply-bugged Viewport 2.0 !?
You would have been better off integrating Iray first, instead of releasing the first, sad version of VP2.0.
Hell, make a real-time material editor for it! Give it an integrated shaders development environment Like Nvidia had something like 8 years ago! WAKE UP for F.SAKE!! It's 2014 already!
People telling you to leverage your big partners ARE RIGHT!
Ask Nvidia... come to terms, have THEM develop VP2.0... it's THEIR JOB!
I'd have more to say... but let's call it a day.
erpy
Posted 2 July 2013 12:19 am
"You may not batch render from Camera Sequencer". WHAT !?
Ok, let's say you'd need to "time remap" the scene...which would be feasable for a "normal dev team", but let's move on.
(Oh wait, you might create a Timewarp curve and do it automatically!... naa, too much of a quick implementation... let's try something a tad more absurd...)
Create a Ubercam! Uh, good idea... BUT, you cannot "stretch" clips... or it won't be created. WHAT !? WTF should I use it for then !?
"Camera Sequencer is good to PREVIEW your cameras and direction"... yeah right...and then ?! How DO I MAKE FRAMES FOR IT !?
"Well you can export it to..." ...uh, say it, please! Let me export the editing in at least 3 or 4 compositing software...
export in..."FINAL CUT"!! WHAT !?
And AfterEffects !? And Nuke !? And Fusion !? Premiere, at least PREMIERE!!!
Oh cmon... you're short-circuiting yourselves on the new features guys. Admit it and restart anew.
I cannot think of a better "core idea" behind a 3D software than Maya.
But you're treating it like sh*t...and it doesn't deserve this.
erpy
Posted 2 July 2013 12:28 am
LBrush it's simply a Set Driven Keyed Blend Shape well (differently) implemented.
Although I'm not in the Maya team, I'm 99% sure it would take less than a week to 4 or less programmers who already know the animation code of Maya.
And it's a great shame it's not there already... like two Maya versions ago!
You must be logged in to post a comment. Login or Register here | http://area.autodesk.com/blogs/cory/announcing_maya_2013?CMP=OTC-RSSMNE01blog | CC-MAIN-2015-35 | refinedweb | 13,283 | 68.5 |
connection closing October 25, 2010 at 6:21 PM
hello, what happens if connection is not closed? ... View Questions/Answers
jdbc driver October 25, 2010 at 6:20 PM
hello, can we create a own jdbc driver? how can we create? ... View Questions/Answers
Fastest type of JDBC Driver October 25, 2010 at 6:17 PM
hello, What is the fastest type of JDBC driver? ... View Questions/Answers
how to conduct the test in java October 25, 2010 at 6:12 PM
how to conduct the test in java ... View Questions/Answers
Connection pooling October 25, 2010 at 5:48 PM
hii, What is Connection pooling? ... View Questions/Answers
DriverManage October 25, 2010 at 5:45 PM
hello, What is DriverManager ? ... View Questions/Answers
Stored procedures October 25, 2010 at 5:38 PM
hello What are stored procedures? ... View Questions/Answers
What is JDBC? October 25, 2010 at 5:36 PM
hello, What is JDBC? ... View Questions/Answers
POP UP WINDOW October 25, 2010 at 5:07 PM
Dear Sir, Can you please help in providing the methos for opening up a pop up window just after selecting an option from a drop down.I have three options in dropdown.If i select the option 'Replacement',then a pop up window should open with name,designation and roll number.Please help me Regar... View Questions/Answers
JSP Compilation October 25, 2010 at 4:21 PM
Explain how a JSP is compiled into servlets by the container?. ... View Questions/Answers
JSP Scriptlet October 25, 2010 at 3:56 PM
Explain JSP Scriptlet and give the syntax of this Scriplet. ... View Questions/Answers
JMS QUEUE October 25, 2010 at 3:51 PM
how to create queue and queueconnectionfactory in websphere application server? ... View Questions/Answers
Ajax technology October 25, 2010 at 3:49 PM
hii, What is ajax ?? ... View Questions/Answers
JSP Declaration October 25, 2010 at 3:40 PM
What is a JSP Declaration?. Explain it. ... View Questions/Answers
HTTP GET or POST for my AJAX call October 25, 2010 at 3:35 PM
hello, Should I use an HTTP GET or POST for my AJAX calls? ... View Questions/Answers
Ajax type October 25, 2010 at 3:33 PM
hiii, Is Ajax a technology platform or is it an architectural style? ... View Questions/Answers
What is Dojo? October 25, 2010 at 3:30 PM
hiii, What is Dojo? ... View Questions/Answers
Implement the Serializable Interface October 25, 2010 at 3:18 PM
hii How many methods do u implement if implement the Serializable Interface? ... View Questions/Answers
Java Beans October 25, 2010 at 3:16 PM
hii What is Java Beans? ... View Questions/Answers
JSP Implicit Objects October 25, 2010 at 3:05 PM
What are implicit objects in JSP? and provide List them? Thanks in advance ... View Questions/Answers
Super class of an Exception class October 25, 2010 at 3:02 PM
hello,,, What is super class of an Exception class? ... View Questions/Answers
JSP Scripting Elements October 25, 2010 at 2:53 PM
Explain the jsp scripting elements. ... View Questions/Answers
Serialize the static variable October 25, 2010 at 2:52 PM
hello, Can we serialize the static variable? ... View Questions/Answers
Functionality of the stub October 25, 2010 at 2:46 PM
hii,, What is the functionality of the stub? ... View Questions/Answers
Clipping October 25, 2010 at 2:43 PM
hii, What is clipping? ... View Questions/Answers
JSP Taglib Directive using process October 25, 2010 at 2:36 PM
How is Taglib Directive used in JSP? ... View Questions/Answers
overloading and overriding October 25, 2010 at 2:35 PM
hello, What is the difference between overloading and overriding? ... View Questions/Answers
struts October 25, 2010 at 2:31 PM
how to start struts? ... View Questions/Answers
JSP tag lib directive October 25, 2010 at 2:28 PM
What is tag lib directive in the JSP? ... View Questions/Answers
Read RFID data October 25, 2010 at 1:26 PM
how to read RFID data using java? ... View Questions/Answers
JSP include directive tag syntax and example October 25, 2010 at 1:05 PM
The syntax and example of the JSP include directive tag. ... View Questions/Answers
Set interface October 25, 2010 at 12:54 PM
hello,, What is the Set interface? ... View Questions/Answers
JSP include directive tag October 25, 2010 at 12:53 PM
What is include directive tag in JSP? ... View Questions/Answers
Thread restart October 25, 2010 at 12:51 PM
hello,, can dead thread restart? ... View Questions/Answers
JSP include directive tag October 25, 2010 at 12:50 PM
What is include directive tag in JSP? ... View Questions/Answers
Phantom memory October 25, 2010 at 12:49 PM
hello,, What is phantom memory?? ... View Questions/Answers
Collections API October 25, 2010 at 12:46 PM
hello, What is the Collections API? ... View Questions/Answers
Dictionary class October 25, 2010 at 12:41 PM
hello,, What is the Dictionary class? ... View Questions/Answers
examples October 25, 2010 at 12:40 PM
Hi sir...... please send me the some of the examples on jdbc connection of mysql database in jsp. thanks for sending me the previews questions . ... View Questions/Answers
JSP page directive tag atributes October 25, 2010 at 12:38 PM
The list of the page directive tag attributes in the JSP. ... View Questions/Answers
lock on a class October 25, 2010 at 12:35 PM
hello, Can a lock be acquired on a class? ... View Questions/Answers
ArrayList and Vector October 25, 2010 at 12:33 PM
hello, Why ArrayList is faster than Vector? ... View Questions/Answers
Serializalble and Externalizable October 25, 2010 at 12:31 PM
hello, What is the difference between Serializalble and Externalizable interface? ... View Questions/Answers
concat and append October 25, 2010 at 12:26 PM
hello, What is the difference between concat and append? ... View Questions/Answers
JSP page directive tag syntax October 25, 2010 at 12:26 PM
Descibe the syntax of the page directive with example In JSP. ... View Questions/Answers
disadvantage of threads October 25, 2010 at 12:20 PM
hello, Can somebody tell me What is the disadvantage of threads? ... View Questions/Answers
implicit objects in jsp October 25, 2010 at 12:13 PM
hello, how many implicit objects in jsp??? ... View Questions/Answers
platfrom independent October 25, 2010 at 12:09 PM
hii, what is platfrom independent?? ... View Questions/Answers
multiple inheritance. October 25, 2010 at 12:07 PM
hello, can java support multiple inheritance??? ... View Questions/Answers
Java is case sensitive October 25, 2010 at 12:04 PM
hello, Why Java is case sensitive? ... View Questions/Answers
JSP page directive tag October 25, 2010 at 12:03 PM
What is page directive tag in JSP?. ... View Questions/Answers
yield and sleep October 25, 2010 at 12:00 PM
hello, What is the difference between yield() and sleep()? ... View Questions/Answers
JCombo Box problem October 25, 2010 at 11:59 AM
I have three combo boxes First combo box display the year Second combo box display the month for the selected year. Third combo box display number of week in a selected month for the year. I am select year, month and the third combo box display the number of week. ... View Questions/Answers
transient variables in java October 25, 2010 at 11:57 AM
hello, What are transient variables in java? ... View Questions/Answers
Type of JSP Directive Tag October 25, 2010 at 11:56 AM
How many types of directive tag in the JSP? ... View Questions/Answers
package October 25, 2010 at 11:54 AM
hello, What is a package? ... View Questions/Answers
Reflection October 25, 2010 at 11:49 AM
hello, What is reflection? ... View Questions/Answers
java October 25, 2010 at 11:45 AM
sir, please send me the some of the examples in jsp-servlet.used to connect the database by using the mysql ... View Questions/Answers
Read Video File October 25, 2010 at 11:45 AM
how to read a video file, after that i want to encrypt and decrypt it. please help me and if u can send me some hint or source code on [email protected] Thanks & Regards Swarit Agarwal ... View Questions/Answers
daemon thread October 25, 2010 at 11:42 AM
hello, What is a daemon thread? ... View Questions/Answers
JSP Directive Tag October 25, 2010 at 11:39 AM
What is JSP Directive tag? ... View Questions/Answers
java API October 25, 2010 at 11:36 AM
hello What is the Java API? ... View Questions/Answers
SimpleTimeZone October 25, 2010 at 11:30 AM
hello, What is the SimpleTimeZone class? ... View Questions/Answers
program code for login page in struts by using eclipse October 25, 2010 at 11:29 AM
I want program code for login page in struts by using eclipse ... View Questions/Answers
GregorianCalendar class October 25, 2010 at 11:28 AM
hello,, What is the GregorianCalendar class? ... View Questions/Answers
while and do while October 25, 2010 at 11:23 AM
hello, What is the difference between a while statement and a do statement? ... View Questions/Answers
Print a statement October 25, 2010 at 11:18 AM
hello what would we output for this statement System.out.println ("5"+"A" + 3); ... View Questions/Answers
Modifiers are allowed in interface October 25, 2010 at 11:15 AM
hello, What modifiers are allowed for methods in an Interface? ... View Questions/Answers
Use of jQuery October 25, 2010 at 11:13 AM
What is the use of jQuery? ... View Questions/Answers
this and super October 25, 2010 at 11:11 AM
hello,, why are this() and super() used ? ... View Questions/Answers
finally block October 25, 2010 at 11:08 AM
hii, If I am writing return at the end of the try block and some code in finally block, then the finally block will execute?? ... View Questions/Answers
JSP October 25, 2010 at 11:02 AM
What is JSP? ... View Questions/Answers
try and finally block October 25, 2010 at 11:02 AM
hello, If I write System.exit (0); at the end of the try block, will the finally block still execute? ... View Questions/Answers
Serialization October 25, 2010 at 10:57 AM
hello What is serialization? ... View Questions/Answers
main method October 25, 2010 at 10:53 AM
hello, Can I make multiple main methods in the same class? If no then what will happen in compilation time? ... View Questions/Answers
Please clarify my doubt October 25, 2010 at 10:52 AM
/here is my sample code for deadlock/ class A { synchronized void foo(B b) { String name = Thread.currentThread().getName(); System.out.println(name + "entered A.foo()"); /*try { //Thread.sleep(1000); } catch(Int... View Questions/Answers
Final Keyword October 25, 2010 at 10:46 AM
hello, What is final? ... View Questions/Answers
Interface and Abstract class October 25, 2010 at 10:40 AM
hello,, Can some body tell me what is the difference between an Interface and an Abstract class? ...
Agile methods October 25, 2010 at 10:33 AM
Why use Agile methods? ... View Questions/Answers
regarding the pdf table using itext October 25, 2010 at 10:17 AM
if table exceeds the maximum width of the page how to manage it ... View Questions/Answers
Make A website October 25, 2010 at 4:25 AM docume... View Questions/Answers
Changing Executable Jar File Icon October 24, 2010 at 8:51 PM
I have created an executable jar file for my java program and the icon that appears is the java icon. I will like to know if there is a way to change this to any icon of my choice. ... View Questions/Answers
Use javascript loops.. October 24, 2010 at 7:18 PM
Write a JavaScript code to find a number of unique letters in string. (Eg. if keyword is Tajmahal, Tajmahal count will be '5' , it only takes these letters T,j,m,h,l , not taken the letter a because it is repeated) , means it doesn't count the repeated letters. ... View Questions/Answers
program in array October 24, 2010 at 4:14 PM
print("code sample");write a program that initializes an array with ten random integers and then prints four lines of output,containing:every element at an even index,every even element,all elements in reverse order,and only the first and last element. ... View Questions/Answers
program of array October 24, 2010 at 4:01 PM
write a program that initializes an array with ten random integers and then prints four lines of output,containing:every element at an even index,every even element,all elements in reverse order,and only the first and last element. ... View Questions/Answers
Array October 24, 2010 at 3:55 PM
Hi, Here is my code: public class Helloworld { public static void main (String [] args) { System.out.println("Hello,World"); } } Thanks. ... View Questions/Answers
Java Bean Properties October 24, 2010 at 1:12 PM
What are the properties of a normal java Bean(Not EJB) ... View Questions/Answers
heap and stack in general programming language October 24, 2010 at 1:10 PM
what's the difference between heap and stack ? ... View Questions/Answers
Use javascript loops.. October 24, 2010 at 11:01 AM
Write a Javascript code to create a redirection script based on day of the week. ...
KEY EVENT HANDLING October 24, 2010 at 12:23 AM
I am trying to write a program that receives every key stroke even when the window is not active or it's minimized. Is there anyway to do this ... View Questions/Answers
jdbc warning regarding to ms access October 23, 2010 at 10:33 PM
shows warning msg while compiling using ms access : warning: sun.jdbc.odbc.JdbcOdbcDriver is Sun proprietary API and may be removed in future release. here is my code import java.sql.*; class AccessData { public static void main(String args[])throws Exception { DriverM... View Questions/Answers
Class and object October 23, 2010 at 7:57 PM
what is exact meaning for the statement, A a1=new B(); ... View Questions/Answers
begineer October 23, 2010 at 6:05 PM val... View Questions/Answers
configure mail sever October 23, 2010 at 5:11 PM
ok James is a best server for javamail but how to configure and run program if you have any program pls send me at [email protected] ... View Questions/Answers | http://www.roseindia.net/answers/questions/203 | CC-MAIN-2017-04 | refinedweb | 2,393 | 53.51 |
This tutorial shows you how to use the NetBeans IDE.
This tutorial uses concepts introduced in more basic tutorials. If you do not have basic knowledge of the IDE and its design components, consider first reading introductory tutorials such as Getting Started with Visual Web JSF Application Development and Using Databound Components to Access a Database.
Note: This document uses the NetBeans IDE 6.0 and 6.1 Releases. If you
are using NetBeans IDE 6.5, see Performing Inserts, Updates, and Deletes.
Expected duration: 45 minutes, which includes a person and corresponding trips,.
InsertUpdate
Note: Creating a project in NetBeans 6.1 includes new options which can be left at the default. For example, the Use Dedicated Folder for Storing Libraries checkbox may be left unselected.
id
personDD
Drag a Message Group component from the Woodstock Basic Palette category and place it to the right of the Drop Down List.
info
error
Open the Services window, expand the Databases node, connect to the Travel database.
travel
derbyClient.jar
<tomcat_install>/common/lib
Expand the jdbc node for the TRAVEL database, then expand the Tables node.
Drag the PERSON node onto the Drop Down List in the Visual Designer.
Right-click the Drop Down List and choose Auto-Submit on Change from the pop-up menu.
Right-click the Drop Down List and choose Configure Virtual Forms from the pop-up menu.
Click New and type person in the Name column. Double-click the field under the Participate column and set it to Yes, and then do the same for the Submit column, as shown in the following figure.
person
Yes
Click the Show Virtual Forms button in the Visual Designer toolbar, as shown in the figure below.
By viewing virtual forms, you can see the relationship between components in the Visual Designer and any virtual forms that you have configured.
Drag the Travel > Tables > TRIP node from the Services window and drop it on the Table component in the Visual Designer.
Right-click the Table and choose Table Layout from the pop-up menu.
Use the < button to remove TRIP.TRIPID, TRIP.PERSONID, and TRIP.LASTUPDATED from the Selected list on the right, as shown in the following figure.
Trips
Summary
Your Table component in the Visual Designer should now look as it does the following figure. Note that if your columns are not in the order shown, you can rearrange them by reopening the Table Layout dialog box, clicking the Columns tab, and using the Up and Down buttons.
In the Navigator window, right-click tripRowSet under SessionBean1 and choose Edit SQL Statement from the pop-up menu.
The SQL Query Editor opens.
In the grid area near the center of the window, right-click in the PERSONID row and choose Add Query Criteria, as shown in the following figure.
In the Add Query Criteria dialog box, set the Comparison drop-down list to =Equals and select the Parameter radio button, and then click OK.
=Equals
You now change the column contents to be editable fields in preparation for adding the ability to insert new trips into the database. When you do so, you take advantage of the compound nature of the Table component by nesting other components inside it.
Right-click the Table component and choose Table Layout.
In the Columns tab, select TRIP.DEPDATE from the Selected list on the right. In the Column Details area at the bottom of the dialog box, change the Component Type from Static Text to Text Field, as shown in the following figure, and click Apply.
Drag the Travel > Tables > TRIPTYPE node onto the Drop Down List in the Table component. If the Choose Target dialog opens, select dropDown1 and click OK.
triptypeDataProvider.
Change the name of the new virtual form to save and the Participate setting to Yes, as shown in the following figure, and then click OK.
save
You now associate the personDD Drop Down List with the Table component to enable the following behavior: When the user selects a person from the list, that person's trips will appear in the table.
In personDD_processValueChange method, add the bold text in Code Sample 1, and then press Alt-Shift-F to reformat your code..
try
form1.discardSubmittedValues("save")
Note also that the event handler does not throw exceptions. Instead, it logs them in the server.log file. The event handler also calls an error method that, in the event of an error, displays a message in the Message Group component.
server.log
Scroll in the Java source to the prerender method, or, if you prefer, type Ctrl-F and search for prerender. Add the following code in bold to the method.
prerender);
}
}
}
Build, deploy, and run the project by clicking the Run Main Project button on the main toolbar. When the page loads into your web browser, the drop-down list is populated with names, and the table is filled with data. When you select a different name from the list, the trips associated with that name appear in the table.
In this section, you add a feature that makes it possible to add a trip to the table by inserting a rowset into the database. First, you provide Message components for the Table's Text Fields. These components ensure that the user sees errors when entering incorrect information. Then you add a Button to the page that enables users to add new rows to the data buffer.
for
textField1
textField2
Set the for property of the third Message component to textField3.
textField3
Make sure that your application looks like the figure below.
Note: There is a known issue that affects the width of the JSF 1.2 Button component in IE7. The workaround is to place the Button component in a layout component (Grid Panel, Group Panel, or Layout Panel). Resizing the layout component automatically resizes the Button component.
Button
Add
Trip
add
add_action
Add the following code shown in bold to the button's event handler method:
public String add_action() {
try {
RowKey rk = tripDataProvider.appendRow();
tripDataProvider.setCursorRow(rk);
tripDataProvider.setValue("TRIP.TRIPID", new Integer(0));
tripDataProvider.setValue("TRIP.PERSONID", personDD.getSelected());
tripDataProvider.setValue("TRIP.TRIPTYPEID", new Integer(1));
} catch (Exception ex) {
log("Error Description", ex);
error(ex.getMessage());
}
return null;
}
Right-click in the Java Editor and choose Fix Imports to resolve the RowKey not found error.
RowKey
The IDE adds the following package to the Page1.java block of import statements:
Page1.java
import com.sun.data.provider.RowKey;
Build, deploy, and run the project by clicking the Run Main Project button . The page loads into your web browser, and the Add Trip button appears, as shown in the following figure. Each time you click the button, a new empty row is appended to the bottom of the table. You are able to edit the information in the row, but because you have not yet provided a mechanism for saving the rowset, your changes will be lost when you choose a different name from the drop-down list.
In this section, you add a second rowset to the project. The rowset is used to calculate the maximum trip ID that has been used.
Open the Services window, select the Databases > Travel > Tables > TRIP table, and drag it onto the SessionBean1 node in the Navigator window.
In the Add New Data Provider dialog, select the Create SessionBean1/tripRowSet1 radio button, change the data provider name to maxTripRowSet, and click OK.
maxTripRowSet
Note: In NetBeans 6.0, rowsets may appear twice in the dialog box. This is a known issue and should be ignored. It does not affect the application in this tutorial.
SELECT MAX(TRAVEL.TRIP.TRIPID)+1 AS MAXTRIPID FROM TRAVEL.TRIP
MAXTRIPID
Close the Query Editor.
Note: This query is not supported by the Query Editor's graphical editor. If you see an alert dialog box complaining of a lexical error, you can safely dismiss it by clicking Continue.
Save
Changes
save.
save_action
Add the following code shown in bold to the button's event handler method:
public String save_action() {
try {
// Get the next key, using result of query on MaxTrip data provider
CachedRowSetDataProvider maxTrip = getSessionBean1().getMaxTripDataProvider();
maxTrip.refresh();
maxTrip.cursorFirst();
int newTripId = ((Integer) maxTrip.getValue("MAXTRIPID"));
// Navigate through rows with data provider
if (tripDataProvider.getRowCount() > 0) {
tripDataProvider.cursorFirst();
do {
if (tripDataProvider.getValue("TRIP.TRIPID").equals
(new Integer(0))) {
tripDataProvider.setValue("TRIP.TRIPID",
new Integer(newTripId));
newTripId++;
}
} while (tripDataProvider.cursorNext());
}
tripDataProvider.commitChanges();
} catch (Exception ex) {
log("Error Description", ex);
error("Error :"+ex.getMessage());
}
return null;
}
Build, deploy, and run the project by clicking the Run Main Project button. The application functions as follows:
In this section, you add a delete feature to the table. Using this feature, users will be able to delete a trip by removing a row from the database. As implemented in this tutorial, the action of the Delete button is immediate and does not require the Save Changes button to delete the row from the database. In fact, because the Delete button event handler uses the commitChanges method, it also saves all pending changes just as the Save Changes button does.
commitChanges
Click Design in the editor window to return to Page1 in the Visual Designer, and then right-click the Trips Summary table and choose Table Layout from the pop-up menu.
With the new column name selected in the Selected list, make the following changes in the Column Details area:
Center
Middle
delete
delete_action
public String delete_action() {
form1.discardSubmittedValues("save");
try {
RowKey rk = tableRowGroup1.getRowKey();
if (rk != null) {
tripDataProvider.removeRow(rk);
tripDataProvider.commitChanges();
tripDataProvider.refresh();}
} catch (Exception ex) {
log("ErrorDescription", ex);
error(ex.getMessage());
}
return null;
}
Build, deploy, and run the project by clicking the Run Main Project button. The following figure shows the running application.
When deployed, you should be able to delete a row from the table to remove it from the database. The delete action will also commit all pending changes to the database.
Now, add a revert feature to the page. Using this feature, users will be able to abandon their edits and revert to the previously saved data. Note that the revert feature will not bring back saved or deleted rows; both the Save Changes and Delete buttons commit changes to the database.
Revert
revert
revert_action
Add the code in bold in the following code sample to the revert_action method.
public String revert_action() {
form1.discardSubmittedValues("save");
try {
tripDataProvider.refresh();
} catch (Exception ex) {
log("Error Description", ex);
error(ex.getMessage());
}
return null;
}
The application as presently configured exhibits some undesirable behavior. For example, if the user enters an invalid date in the first column of an existing row and then clicks the Add button, the operation fails because a conversion error on the date rejects the form submission. The desired behavior when the user clicks the Add button is to forego processing the input fields in the table so that a new row can be added regardless of pending edits to existing rows.
Similarly, when the user clicks the Revert button, the intention is to abandon all edits, so edits should also be ignored in that case. However, when the user clicks the Delete button, you still want validation to happen because this button not only deletes a row, it also submits any pending changes, requiring that input fields be processed first.
To ensure that the input fields on the page forego processing (including validation checks) when the user clicks the Add or Revert button, you will make these buttons submit a virtual form. You can make both buttons submit the same virtual form because they need to submit a virtual form that has no participants.
In the Visual Designer, Ctrl-Click to select the Add, Revert, and Delete buttons, and then right-click and choose Configure Virtual Forms from the pop-up menu.
In the Configure Virtual Forms window, click New, name the new virtual form add/revert/delete, and set Submit to Yes. Click OK.
add/revert/delete
Build, deploy, and run the project by clicking the Run Main Project button . The figure below shows the running application.
When deployed, you are able to perform the following functions:
Abandon your edits and revert to the most recently saved data from the database.
In this tutorial, you associated a Table component, Text Field components, and Drop Down List components with information in a database. You set properties on components and added prerender and event code to insert, update, and delete data from the database and revert changes entered on the form. You used virtual forms, which allowed your application to use just a single page and allowed submitted data to bypass validation checks when adding a row or reverting changes.
Bookmark this page | http://www.netbeans.org/kb/60/web/inserts-updates-deletes.html | crawl-002 | refinedweb | 2,119 | 55.84 |
NEW: Learning electronics? Ask your questions on the new Electronics Questions & Answers site hosted by CircuitLab.
Microcontroller Programming » For command
i wanted to use the for() command to increment the variable
int i;
for(i=0; i<=59; i++) {
}
I want the variable "i" to increment with a particular time interval.
how do i do it?
The best answer really depends on why you want to do that. I could tell you to do
for (i = 0; i < 60; i++)
{
delay_ms(1000);
}
but that might not be the best way, depending on your application.
Well here is a unqualified (non programmer) answer. I am sure others will do better but here is a literal answer:
for(i=0; i<=59; i++)
{
delay_ms(500);
LCD write i;
}
I believe that will work, of course the LCD write is sudo.
Others will jump all over me if I am incorrect.
Ralph
i want to do this so i can show time on my lcd
will this code work for a clock?
#define F_CPU 14745600
#include <stdio.h>
#define F_CPU 14745600
#include <avr/io.h>
#include <avr/interrupt.h>
#include <avr/pgmspace.h>
#include <inttypes.h>
#include "../libnerdkits/delay.h"
#include "../libnerdkits/lcd.h"
int main(){
// variable seconds
int8_t s;
// variable Minutes
int8_t m;
// variable hrs
int8_t h;
while (1){
// added 1 since it will count from 0,1,2....
// I want it to count from 1,2,3.....60
for 1+(s = 0; s < 59; s++) {
// delaying for 1000 milli sec
delay_ms(1000);
}
//same here
for 1+(m = 0; m < 59; m++) {
delay_ms (60000) ;
}
for 1+(h = 0; h < 24; h++) {
delay_ms (3600000);
}
//send it to lcd
printf_p(PSTR("time is %d : %d : %d ", h, m, s));
}
return 0;
}
Hi Hari,
Ralph gave you some great pointers up above, I suggest you take a close look at what he posted for you. The place where you are adding 1 does not really make sense. If you want a for loop to start from 1, simply make it start there by using the initialization parameter. The loop
int i;
for(i=1;i<100;i++){
// do something
}
would start at 1 and keep looping while i is less than 100.
Your code would also not do what you really expect it to because of the way you laid out the for loops. Your first for loop will actually delay you one second, but then the next one will wait for full minute before the loop exists, and the next one would wait a full hour, the last one would not even really work because you would overflow the capacity of the delay function. It is a very good attempt at coding, and you should definitely take a second to understand why the way you laid out the flow does not work.
I highly recommend you take a look at our crystal real time clock tutorial for an idea of how we kept track of "real time"
Humberto
Thanks Humberto!
But i do not understand the code written forn the crystal time clock
I do not understand how u set up the interrupt.
hariharan, start a new thread "How do you setup a interrupt". We have not had a detailed discussion on interrupts for a while (there are lots of discussions in the forum). It might be good to get a consolidated starting from scratch discussion going.
Start with what you know, do some searching and make a attempt then post what you have.
Please log in to post a reply. | http://www.nerdkits.com/forum/thread/1439/ | CC-MAIN-2018-09 | refinedweb | 587 | 78.79 |
Does anyone know how to convert a Y axis rotation into a Z axis rotation. I'm trying to translate the center Y spin of a plane to the center Z spin of another plane, like taking a horizontally spinning wheel and turning it vertical. I think this can be done using the euler values, but I can't seem to get it right. Thanks
Answer by outerringz
·
Feb 04, 2013 at 10:17 AM
I finally figured it out, here's the solution. Copy the Y+90 to X and set static 270 to Y and 90 to Z.
transform.eulerAngles = new Vector3(sourceObject.transform.eulerAngles.y + 90, 270f, 90f);
Answer by robertbu
·
Feb 04, 2013 at 07:15 AM
Here is an example:
public class LockStep : MonoBehaviour {
public GameObject goMatchRotation;
void Update () {
Vector3 v3T = goMatchRotation.transform.localEulerAngles;
float fT = v3T.z;
v3T.z = v3T.y;
v3T.y = fT;
transform.localEulerAngles = v3T;
}
}
Thanks for your response, but I already tried that. It tumbles the plane end over end like flipping a coin ins$$anonymous$$d of spinning it like a wheel.
I didn't read your question carefully enough. The reference to Z threw me a bit since planes don't have a Z axis. If I'd understood the problem I would have suggested making the plane a child of an empty game object and assign the Y rotation of the source object to the Z of the empty game object. I'll bet there was a lot of trial and error work to find your formula.
There was a lot of slow manual iterations through the spin to see the value changes and understand what was actually happening, but now it works and I can move on to the next item in the list..
RotateAround using eulerAngles
1
Answer
My Obstacle Rotations Are Not Correct
0
Answers
Head and body rotate threshold
1
Answer
Rotating a rigidbody on mouse click.
1
Answer
Camera viewport transformation from one world to the rotated world.
1
Answer | https://answers.unity.com/questions/393334/convert-y-axis-rotation-to-z-axis-rotation.html?sort=oldest | CC-MAIN-2020-34 | refinedweb | 337 | 62.27 |
seithRename
DOWNLOAD:
Updated on 18 September 2010: v1.0
– First release.
(Maya 2008, 2009, 2010 and 2011)
INFO:
I know there are already several renaming scripts for Maya out there, but each time I tried one out I found its interface (or overall method) to be unintuitive at best. I just hated having to stop and think twice about how to get what I needed, so I made this tool. It’s written in Python and it’s so simple it can be used by idiots. See, I use it all the time! 😛
Features:
- Rename anything by anything.
- Add a prefix or suffix.
- Act on just the selection or anything within the hierarchy.
- Precise feedback on what happened during the process.
Installation:
First, close Maya. Then put seithRename.py within your “My Documents\maya\20xx\scripts” folder (or “C:\Program Files\Autodesk\Maya20xx\Python\lib\site-packages“). Finally, start Maya and make a MEL button with the following command:
python(“import seithRename as seithRename”);
python(“seithRename.seithRename()”);
This will launch the interface and then you’re on.
Hi, Seith! Great job! I am using maya 2013 now and some problem with executing this script tool. Maya told me that syntax error :# Error: invalid syntax #
import seithRename as seithRename! Please reply if you c this. THanks!!
Mmh. Are you sure you’re in a Python tab and not a MEL tab when you type this?
Definitely! Thanks for replying. I copied the two lines to Python window, and it told me that. I tried twice by copying the .py file to both documents path and program path.@Seith
Definitely! Thanks for replying. I copied the two lines to Python window, and it told me that. I tried twice by copying the .py file to both documents path and program path.@Seith
Hi, Seith! I uploaded the error picture here.
Hi, Seith! a friend helped me, problem solved!
she checked the usage inside the py file and found this:
python(“import seithRename as seithRename”);
python(“reload(seithRename)”);
python(“seithRename.seithRename()”);
it seemed you missed 1 line above.
FYI,
nothing wrong with Seith’s tool and his instruction, mind when click download, if you r using chrome, make sure to right click then select save as.
Great tool!
I cant download the pyscript
i like the idea anyhoo!
Hey Jason, you need to click on the download icon to download the script…
Hi Seith:
The link doesn´t work, can you fix it please? | https://seithcg.com/wordpress/?page_id=731 | CC-MAIN-2022-40 | refinedweb | 411 | 77.33 |
I’ve had some issues lately with PHP memory limits lately:
Out of memory (allocated 22544384) (tried to allocate 232 bytes)
These are quite the nuisance to debug since I’m not left with a lot of info about what caused the issue.
Adding a shutdown function has helped
register_shutdown_function('shutdown');
then, using error_get_last(); I can obtain information about the last error, in this case, the “Out of memory” fatal error, such as the line number, and the php file name.
This is nice and all, but my php program is heavily object oriented. An error deep in the stack doesn’t tell me much about the control structure or the execution stack at the moment of the error. I’ve tried debug_backtrace(), but that just shows me the stack during shutdown, not the stack at the time of the error.
I know I can just raise the memory limit using ini_set or modifying php.ini, but that doesn’t get me any closer to actually figuring out what is consuming so much memory or what my execution flow looks like during the error.
Anyone have a good methodology for debugging memory errors in advanced Object Oriented PHP programs?
echo '<pre>'; $vars = get_defined_vars(); foreach($vars as $name=>$var) { echo '<strong>' . $name . '</strong>: ' . strlen(serialize($var)) . '<br />'; } exit(); /* ... Code that triggers memory error ... */
I use this to print out a list of currently assigned variables just before a problem section of my code, along with a (very) rough estimate of the size of the variable. I go back and
unset anything that isn’t needed at and beyond the point of interest.
It’s useful when installing an extension isn’t an option.
You could modify the above code to use
memory_get_usage in a way that will give you a different estimate of the memory in a variable, not sure whether it’d be better or worse.
Answer:
Memprof is a php extension that helps finding those memory-eaters snippets, specially in object-oriented codes.
This adapted tutorial is quite useful.
Note: I unsuccessfully tried to compile this extension for windows. If you try so, be sure your php is not thread safe. To avoid some headaches I suggest you to use it under *nix environments.
Another interesting link was a slideshare describing how php handles memory. It gives you some clues about your script’s memory usage.
Answer:
I wonder is perhaps your thinking regards methodology is flawed here.
The basic answer to your question – how do I find out where this error is occurring? – has already been answered; you know what’s causing that.
However, this is one of those cases where the triggering error isn’t really the problem – certainly, that 232 byte object isn’t your problem at all. It is the 20+Megs that was allocated before it.
There have been some ideas posted which can help you track that down; you really need to look “higher level” here, at the application architecture, and not just at individual functions.
It may be that your application requires more memory to do what it does, with the user load you have. Or it may be that there are some real memory hogs that are unnecessary – but you have to know what is necessary or not to answer that question.
That basically means going line-by-line, object-by-object, profiling as needed, until you find what you seek; big memory users. Note that there might not be one or two big items… if only it were so easy! Once you find the memory-hogs, you then have to figure out if they can be optimized. If not, then you need more memory.
Answer:
Check the documentation of the function memory_get_usage() to view the memory usage in run time.
Answer:
Website “IF !1 0” provides a simple to use MemoryUsageInformation class. It is very useful for debugging memory leaks.
<?php class MemoryUsageInformation { private $real_usage; private $statistics = array(); // Memory Usage Information constructor public function __construct($real_usage = false) { $this->real_usage = $real_usage; } // Returns current memory usage with or without styling public function getCurrentMemoryUsage($with_style = true) { $mem = memory_get_usage($this->real_usage); return ($with_style) ? $this->byteFormat($mem) : $mem; } // Returns peak of memory usage public function getPeakMemoryUsage($with_style = true) { $mem = memory_get_peak_usage($this->real_usage); return ($with_style) ? $this->byteFormat($mem) : $mem; } // Set memory usage with info public function setMemoryUsage($info = '') { $this->statistics[] = array('time' => time(), 'info' => $info, 'memory_usage' => $this->getCurrentMemoryUsage()); } // Print all memory usage info and memory limit and public function printMemoryUsageInformation() { foreach ($this->statistics as $satistic) { echo "Time: " . $satistic['time'] . " | Memory Usage: " . $satistic['memory_usage'] . " | Info: " . $satistic['info']; echo "\n"; } echo "\n\n"; echo "Peak of memory usage: " . 
$this->getPeakMemoryUsage(); echo "\n\n"; } // Set start with default info or some custom info public function setStart($info = 'Initial Memory Usage') { $this->setMemoryUsage($info); } // Set end with default info or some custom info public function setEnd($info = 'Memory Usage at the End') { $this->setMemoryUsage($info); } // Byte formatting private function byteFormat($bytes, $unit = "", $decimals = 2) { $units = array('B' => 0, 'KB' => 1, 'MB' => 2, 'GB' => 3, 'TB' => 4, 'PB' => 5, 'EB' => 6, 'ZB' => 7, 'YB' => 8); $value = 0; if ($bytes > 0) { // Generate automatic prefix by bytes // If wrong prefix given if (!array_key_exists($unit, $units)) { $pow = floor(log($bytes) / log(1024)); $unit = array_search($pow, $units); } // Calculate byte value by prefix $value = ($bytes / pow(1024, floor($units[$unit]))); } // If decimals is not numeric or decimals is less than 0 // then set default value if (!is_numeric($decimals) || $decimals < 0) { $decimals = 2; } // Format output return sprintf('%.' . $decimals . 'f ' . $unit, $value); } } | https://exceptionshub.com/debugging-how-do-you-debug-php-out-of-memory-issues.html | CC-MAIN-2021-25 | refinedweb | 914 | 51.89 |
The Question is:
Dear Wizard,
I am porting a UNIX program for scanning logfiles, and have run
into a few snags. The first I have been told is impossible (opening
a file and fseek'ing back and forth and using fgets() to read the
data) because of limitations in the C RTL. This I have worked around. But
now that the program is completed and I am testing on "real" logfiles I find
that I am unable to access these. The fopen() function (which opens the
file read-only ) returns with a
n error message indicating that the file is locked by another process. Is
there no way of opening a file for reading from standard C (that is non-RMS
functions) if another process is using that file? I
have attempted to set flags for sharing (shr=get looked most promising) but
am getting nowhere. If this is not possible without using RMS functions
please point me to some sample code, as I have never used these functions
before and am more than a littl
e intimidated by the apparent amount of code needed to do seemingly simple
things ;-)
Thank you in advance for your sage advice!
PS: If it is relevant I am using DecC v6
The Answer is :
The fopen option:
"shr=get,upd,put"
or similar is commonly used, in conjunction with periodic calls
to the routines:
fsync(fileno(fp))
and
fflush(fp)
Note: the more calls to fsync and fflush, the slower things get.
See the documentation on the creat call for details, as well as
the RMS documentation.
One example would be:
fopen(file,"w","shr=get","ctx=rec","fop=dfw");
with readers opening the file with "shr=get".
A full example is attached below.
Also please see "C stdout and log files? message logging?" in topic
(2078) here in the Ask The Wizard area. Also see the RMS_EXAMPLES.C
module found in SRH_EXAMPLES directory on the OpenVMS Freeware V4.0
and V5.0 distributions -- this example shows native RMS calls and the
FAB and RAB structures in the context of a C application program.
--
#include <signal.h>
#include <ssdef.h>
#include <stdio.h>
#define LOOPMAX 50
#define LOOPINT 3
main()
{
FILE *fp;
char *fn = "sys$scratch:tmp.tmp";
int i;
printf( "Opening scratch file %s for shared access...\n", fn );
printf( "This application writes to the shared file...\n");
fp = fopen( fn, "w", "shr=get", "rat=cr", "rfm=var", "ctx=rec" );
for ( i = 0; i < LOOPMAX; i++ )
{
printf( "Iteration count: %d\n", i );
fprintf( fp, "Iteration count: %d\n", i );
fsync( fileno( fp ));
sleep( LOOPINT );
}
printf( "Done.\n" );
printf( "Please delete scratch file %s ...\n", fn );
return SS$_NORMAL;
}
--
#include <signal.h>
#include <ssdef.h>
#include <stdio.h>
#define LOOPMAX 50
#define TEXTMAX 100
#define LOOPINT 3
main()
{
FILE *fp;
char *fn = "sys$scratch:tmp.tmp";
char txt[TEXTMAX];
int i;
printf( "Opening scratch file %s for shared access...\n", fn );
printf( "This application reads from the shared file...\n");
fp = fopen( fn, "r", "shr=get,put,upd", "rat=cr", "rfm=var", "ctx=rec" );
for ( i = 0; i < LOOPMAX; i++ )
{
fgets( txt, TEXTMAX, fp );
printf( "Read: <%s>\n", txt );
sleep( LOOPINT );
}
printf( "Done.\n" );
return SS$_NORMAL;
} | http://h71000.www7.hp.com/wizard/wiz_2867.html | CC-MAIN-2015-11 | refinedweb | 534 | 73.68 |
Execute arbitrary JS with callbacks
Execute arbitrary JS with callbacks in node.js. Also counts asynchronous operations and does not return until all callbacks have been executed.
Note: This library actually uses
vm.runInNewContext() instead of
eval() for a bit more added security, though it doesn't fork a process, so it's best used with trusted code.
npm install async-eval npm test
or
git clone npm install npm test
var asyncEval = require('async-eval'); var someObject = {x: 5, y: 10}; function waitOneSecond(callback) { setTimeout(callback, 1000); } var options = { this: someObject, asyncFunctions: { waitOneSecond: waitOneSecond } } asyncEval('waitOneSecond(function() { this.x += 2; });', options, function() { console.log(someObject.x); // 7 });
asyncEval(code, [options], [callback])
asyncEval() will interpret and execute
code and run
callback when the code and every asynchronous function it calls has finished running.
this
Default: {}
Sets the object that will be used as
this in the executed code and any nested callbacks.
context
Default: {}
Sets the global context in the executed code. Put any synchronous DSL functions and global variables here.
asyncFunctions
Default: {}
Registers asynchronous functions into the
context. Asynchronous functions must be listed in the
asyncFunctions property so that asyncEval can count pending callbacks.
The functions registered in
asyncFunctions must take a callback as the last argument.
These functions can be namespaced with objects, for example:
asyncFunctions: { users: { get: function(callback) { /* ... */ }, create: function(user, callback) { /* ... */ }, }, posts: { get: function(callback) { /* ... */ }, create: function(post, callback) { /* ... */ }, } } | https://www.npmjs.com/package/async-eval | CC-MAIN-2016-40 | refinedweb | 234 | 50.23 |
Learn to Code iOS Apps 1: Welcome to Programming
During your years of iPhone usage have you ever thought “Gee, I wish I could write a mobile app”, or even “Sheesh, I could totally write a better app than that!”?
You’re in luck – developing an iOS app is not hard. In fact, there are numerous tools that make developing your own iOS app easy and fun. Armed with a little knowledge and these tools, you too can learn to code iOS apps!
This tutorial series will teach you how to make an iOS app from scratch. No knowledge of programming is required to follow this tutorial series — the entire process is broken down into a sequence of steps that will take you from programming zero to App Store hero.
This series has four parts:
- In Part 1 (you are here!), you will learn the basics of Objective-C programming and start to create your first simple game.
- In Part 2, you will continue learning Objective-C fundamentals as you finish building the game.
- In Parts 3 and 4, you will take what you've learned to iOS and turn the game into a real iPhone app.
The only prerequisite to this series is a Mac running OS X Lion (10.7) or later – and having a willingness to learn! :]
Note: If you’re already familiar with the basics of Objective-C and Foundation, feel free to skip ahead to Part 3 and get started with iOS.
Getting Started
The first thing you need to do is install Xcode, Apple's free development tool, which you can download from the Mac App Store. Once it's installed, you'll use it to create a simple OS X command line application.
“Wait a minute,” you may think, “Why am I creating a Mac OSX command line app, I wanted to make an iPhone app!”
Well, native Mac and iOS apps are both written in the same programming language — Objective-C — and use the same set of tools to create and build applications. So starting with a command line app is the simplest way to start learning the basics. Once you’ve mastered doing some basic things there, making an iPhone app (like you’ll do later in this series) will be that much easier!
So let’s get started. Open up Xcode, and you’ll see a window that looks like this:
Click the button that says Create a new Xcode project, located directly below the Welcome to Xcode title, as shown in the screenshot below:
If you accidentally close the “Welcome to Xcode” window, you can create a new project by going to the File menu and selecting New > Project….
In the column on the left hand side, find the OS X section, click on Application and select Command Line Tool as shown below:
Click Next. On the following screen, fill in the fields as indicated:
- Product Name: My First Project
- Organization Name: This field can be left blank, or you can enter your company name.
- Company Identifier: Enter com.yourname, such as com.johnsmith
- Type: Foundation
- Use Automatic Reference Counting: Check this box
Your screen should resemble the one below:
Click Next. Choose a location to store the project files (the Desktop is as good a place as any), and click Create. Xcode will set up your new project and open it up in the editor for you.
Running Your First App
Xcode comes with project templates which include some basic starter code; that means that even before you’ve written a line of code, you can run your project and see what it looks like. Granted, your project won’t do much right now, but this is a good opportunity to become familiar with running your project and viewing the output.
To build and run your project, find the Run button on the upper left corner of the Xcode window, as shown below, and click it:
Look at the bottom of the screen in the All Output pane; you should see Hello, World! displayed there, as shown below:
How about that — you’ve created and run your first OS X program! Before you go adding more functionality to your program, take a few minutes and go through the following sections to learn about the various parts of Xcode and how your program is structured.
Note: If you want to learn more about Xcode and how to use it, you can always refer to the Apple Xcode User Guide.
The left pane of Xcode displays a list of files that are part of the project. The files you see were automatically created by the project template you used. Find main.m inside the My First Project folder and click on it to open it up in the editor, as shown below:
The editor window should look very similar to the following screenshot:
Find the following line located around the middle of the file:

NSLog(@"Hello, World!");

Aha — this looks like the line that printed out the text that you saw in the “All Output” pane. To be certain of that, change the text to something else, for example:

NSLog(@"Hello, Objective-C!");
Click the Run button; you should see your new text in the “All Output” pane as shown below:
You have now changed your program to output your own custom message. But there’s obviously more to the app than just a single line of output. What makes the app tick?
The Structure of Your Source Code
main.m is the source code of your application. Source code is like a list of instructions to tell the computer what you want it to do.
However, a computer cannot run source code directly. Computers only understand a language called machine code, so there needs to be an intermediate step to transform your high-level source code into instructions that the CPU can carry out. Xcode does this when it builds and runs your app by compiling your source code. This step processes the source code and generates the corresponding machine code.
If this sounds complicated, don’t worry — you don’t need to know anything about the machine language part other than to know it’s there. Both you and the compiler understand Objective-C code, so that’s the common language you’ll use to communicate.
At the top of main.m, you’ll see several lines beginning with two slashes (//), as shown in the screenshot below:
These lines are comments and will be ignored by the compiler. Comments are used to document the code of your app and leave any tidbits of information that other programmers — or your future self — might find useful. Look at the middle of the file and you will see a perfect example of this:
The comment // insert code here... is part of the project template from Xcode. It doesn’t change how the program runs, but it was put there by some helpful engineer at Apple to help you understand the code and get started.
Import Statements
Directly below the comments at the top of main.m is the following line:

#import <Foundation/Foundation.h>

That line of code is known as an import statement. In Xcode, not everything has to be contained in one single file; instead, you can use code contained in separate files. The import statement tells the compiler that “when you compile this app, also use the code from this particular file”.
As you can imagine, developing for OS X and iOS requires a lot of diverse functionality, ranging from dealing with text, to making requests over a network, to finding your location on a map. Rather than include a veritable “kitchen sink” of functionality into every app you create, import statements allow you to pick which features you require for your app to function. This helps to decrease the size of your code, the processing overhead required, and compile time.
Apple bundles OS features into frameworks. The import statement shown above instructs the compiler to use the Foundation framework, which provides the minimum foundation (as the name suggests) for any app.
Here’s a bit of trivia for you: how many lines of code do you think Foundation/Foundation.h adds to your main.m file? 10? 1000? 100000? A million?
The Main Function
Look at the line following the import statement:

int main(int argc, const char * argv[])

This line declares a function called main. All of the code in your app that provides some type of processing or logic is encapsulated into functions; the main function is what kicks off the whole app.
Think of a function as a unit of code that accepts input and produces output. For example, a function could take an account number, look it up in a database, and return the account holder’s name.
The int part of int main means that main returns an integer such as 10 or -2. The (int argc, const char * argv[]) bits in parentheses are the arguments, or inputs, to the function. You’ll revisit the arguments of a function a bit later on.
Immediately below int main is an open curly brace ({) which indicates the start of the function. A few lines down you’ll see the corresponding closing curly brace (}). Everything contained between the two braces is part of the main function.
Since Objective-C is a procedural language, your program will start at the top of main and execute each line of the function in order. The first line of main reads as follows:

@autoreleasepool {

Just like in main, curly braces are used to surround a group of related lines of code. In this case, everything between the braces is part of a common autorelease pool.
Autorelease pools are used to manage memory. Every object you use in an app will consume some amount of memory — everything from buttons, to text fields, to advanced in-memory storage of user data eats away at the available memory. Manual memory management is a tricky task, and you’ll find memory leaks in lots of code — even code written by expert programmers!
Instead of tracking all the objects that consume memory and freeing them when you’re done with them, @autoreleasepool automates this task for you. Remember when you created your project in Xcode and checked “Use Automatic Reference Counting”? Automatic Reference Counting, or ARC, is another tool that helps manage memory in your app so you almost never need to worry about memory usage yourself.
You’ll recognize the next line; it’s the one that you edited to create a custom message:

NSLog(@"Hello, Objective-C!");

The NSLog function prints out text to the console, which can be pretty handy when you’re debugging your code. Since you can’t always tell exactly what your app is doing behind the scenes, NSLog statements help you log the actions of your app by printing out things like strings or the values of variables. By analyzing the NSLog output, you’ll gain some insight as to what your app is doing.
If you’re worried about your end user seeing NSLog statements on their iPhones, don’t fret — the end user won’t see the NSLog output anywhere in the app itself.
In programming, text inside double quotation marks is known as a string. A string is how you store words or phrases. In Objective-C, strings are prefixed with an @ sign.
Look at the end of the NSLog line; you’ll see that the line is terminated by a semicolon. What does that do?
The Objective-C compiler doesn’t use line breaks to decide where one “line” of code ends and the next begins; instead, semicolons indicate the end of a single statement. The NSLog statement above could be written like this:

NSLog(
    @"Hello, Objective-C!"
);

…and it would function in the same manner.
To see what happens when you don’t terminate a line of code with a semicolon, delete the semicolon at the end of the NSLog statement, then press the Run button. You’ll see the following error indicated in Xcode:
The NSLog line is highlighted in red, and a message states "Expected ';' after expression". Syntax errors like this stop the compiler in its tracks, and the compiler won’t be able to continue until you fix the issue. In this case, the correction is simple: just add the semicolon back at the end of the line, and your program will compile and run properly.
There’s just one more line of code to look at in main:

return 0;

This line of code is known as a return statement. The function terminates when this line is encountered; therefore any lines of code following the return statement will not execute. Since this is the main function, this return statement will terminate the entire program.
What does the “0” mean after the return statement? Recall that this function was declared as int main, which means the return value has to be an integer. You’re making good on that promise by returning the integer value “0”. If there are no actual values to be returned to the caller of this function, zero is typically used as the standard return value to indicate that the function completed without error.
Working With Variables
Computers are terribly good at remembering pieces of information such as names, dates, and photos. Variables provide ways for you to store and manipulate these types of objects in your program. There are four basic types of variables:
- int: stores a whole number, such as 1, 487, or -54.
- float: stores a floating-point number with decimal precision, such as 0.5, 3.14, or 1.0
- char: stores a single character, such as “e”, “A”, or “$”.
- BOOL stores a YES or NO value, also known as a “boolean” value. Other programming languages sometimes use TRUE and FALSE.
To create a variable — also known as declaring a variable — you simply specify its type, give it a name and optionally provide a default value.
Add the following line of code to main.m between the @autoreleasepool line and the NSLog line:

int num = 400;

Don’t forget that all-important semicolon!
The line above creates a new integer variable called num and assigns it a value of 400.
Now that you have a variable to use in your app, test it out with an NSLog statement. Printing out the values of variables is a little more complicated than printing out strings; you can’t just put the word “num” in the message passed to NSLog and see it output to the console.
Instead, you need to use a construct called format specifiers, which use placeholders in the text string to show NSLog where to put the value of the variable.
Find the NSLog line in main.m and replace it with the following line of code:

NSLog(@"The value of num is %i", num);

Click the Run button in the upper left corner. You should get a message in the console that says:

The value of num is 400
That looks great — but how did Xcode know how to print out the value of num?
The %i in the code above is a format specifier that says to Xcode “replace this placeholder with the first variable argument following this quoted string, and format it as an integer”.
What if you had two values to print out? In that case, the code would look similar to the following (where otherNum is a second integer variable):

NSLog(@"The values are %i and %i", num, otherNum);

Okay, so %i is used for integer formatting. But what about other variable types? The most common format specifiers are listed below:
- %i: int
- %f: float
- %c: char
There isn’t a specific format specifier for boolean values. If you need to display a boolean value, use %i; it will print out “1” for YES and “0” for NO.
Along with declaring variables and setting and printing values, you can also perform mathematical operations directly in your code.
Add the following line to main.m, immediately below the int num = 400; line:

num = num + 100;

The above code takes the current value of num, adds 100 to it, and then replaces the original value of num with the new sum — 500.
Press the Run button in the upper left corner; you should see the following output in your console:

The value of num is 500
That’s enough theory to get started — you’re probably itching to start coding your first real app!
Building Your First Game
The application you’ll create in this tutorial is the classic game “Higher or Lower”. The computer generates a secret random number and prompts you to guess what that number is. After each successive guess, the computer tells you if your guess was too high or too low. The game also keeps track of how many turns it took for you to guess the correct number.
To get started, clear out all of the lines in the @autoreleasepool block of main.m so that main looks like the code below:

int main(int argc, const char * argv[])
{
    @autoreleasepool {

    }
    return 0;
}

All the code you add in the steps below will be contained between the curly braces of the @autoreleasepool block.
You’re going to need three variables: one to store the correct answer, one to store the player’s guess and one to store the number of turns.
Add the following code within the @autoreleasepool block:

int answer = 0;
int guess = 0;
int turn = 0;

The code above declares and initializes the three variables you need for your game. However, it won’t be much fun to play the game if answer is always zero. You’ll need something to create random numbers.
Fortunately, there’s a built-in random number generator, arc4random, which generates random numbers for you. Neat!

Add the following code directly below the three variable declarations you added earlier:

answer = arc4random();
NSLog(@"The answer is %i", answer);

answer now stores a random integer. The NSLog line is there to help you test your app as you go along.
Click the Run button in the upper left corner and check your console output. Run your app repeatedly to see that it generates a different number each time. It seems to work well, but what do you notice about the numbers themselves?
The numbers have a huge range — trying to guess a number between 1 and 1228691167 doesn’t sound like a lot of fun. You’ll need to scale those numbers back a little to generate numbers between 1 and 100.
There’s an arithmetic operator called the modulo operator — written as
% in Objective-C — that can help you with this scaling. The modulo operation simply divides the first number by the second number and returns the remainder. For example,
14705 % 100 will produce
5, as 100 goes into 14705 a total of 147 times, with a remainder of 5.
To scale your values back between 1 and 100, you can simply use the above trick on your randomly generated numbers. However, if you divide the randomly generated number by 100, you’ll end up with numbers that range from 0 to 99. So, you simply need to add 1 to the remainder to get values that range from 1 to 100.
Find the following line in your code:

answer = arc4random();

…and modify it to look like the line below:

answer = arc4random() % 100 + 1;
Run your app a few times and check the console output. Instead of huge numbers, your app should only produce numbers between 1 and 100.
You now know how to create and display information to your user, but how do you go about accepting input from the user to use in your app?
That’s accomplished by the
scanf function — read on to learn how it works.
Obtaining User Input
Add the following lines of code immediately after the previously added code:
Aha — that
%i looks familiar, doesn’t it? Format specifiers are used for output and input functions in your app. The
%i format specifier causes
scanf to process the player’s input as an integer.
Run your app; when you see the “Enter a number” prompt, click your mouse in the console to make the cursor appear. Type a number and press Enter; the program should print the number back to you, as shown in the screenshot below:
Now that you’ve confirmed that the random number generator and the user input methods work, you don’t need your debug statements any longer. Remove the following two
NSLog statements from your code:
and
Okay — you have the basic user input and output methods in place. Time to add some game logic.
Working With Conditionals
Right now, your code runs from top to bottom in a linear fashion. But how do you handle the situation where you need to perform different actions based on the user’s input?
Think about the design of your game for a moment. Your game has three possible conditions that need to be checked, and a set of corresponding actions:
- If the guess is higher than the answer, tell the player the guess was too high.
- If the guess is lower than the answer, tell the player the guess was too low.
- If the guess matches the answer, congratulate the player.
Conditionals work by determining if a particular set of conditions is true. If so, then the app will perform the corresponding specific set of actions.
Add the following lines of code immediately after the
scanf("%i", &guess); line:
The conditional statement above starts with an
if statement and provides a set of conditions inside the parentheses. In the first block, the condition is “is
guess greater than
answer?”. If that condition is true, then the app executes the actions inside the first set of curly braces, skips the rest of the conditional statement, and carries on.
If the first condition was not met, the reverse condition is tested with an
else if statement: “is
guess less than
answer?”. If so, then the app executes the second set of actions inside the curly braces.
Finally, if neither of the first two conditions are true, then the player must have guessed the correct number. In this case, the app executes the third and final set of actions inside the curly braces. Note that this
else statement doesn’t have any conditions to check; this acts as a “catch-all” condition that will execute if none of the preceding conditions were true.
There are many different comparison operators that you can use in your
if statements, including the ones listed below:
- > : greater than
- < : less than
- >= : greater than or equal to
- <= : less than or equal to
- == : equal to
- != : not equal to
Note: To check if two variables are equal, use two equal signs. A single equals sign is the assignment operator, which assigns a value to a variable. It’s an easy mistake to make, but just remember that “equal TO” needs “TWO equals”! :]
Run your app, and try to guess the number that the computer chose. What happens after you make one guess?
Right now you can only enter one guess before the program quits. Unless you are extremely good at guessing — or psychic! :] — your app will tell you that your guess is incorrect and terminate.
Well, that’s no fun. You need some way to loop back to some point in the program and give the player another chance to guess. Additionally, you want the app to stop when the player guesses the correct number.
This is a job for a while loop.
Working With While Loops
A while loop is constructed much like an
if statement; they both have a condition and a set of curly braces that contain code to execute if the condition is true.
An
if statement runs a code block only once, but a while loop will run the block of code repeatedly until the condition is no longer true. That means your code block needs an exit condition that makes the condition false to end the execution of the while loop. If you don’t have an exit condition, the loop could run forever!
The first question is which code needs to be inside the while loop. You don’t want to loop over the random number generation with the
arc4random statement, or else the player will be guessing a new random number each time! Just the user prompt, scanf, and the conditional
if block needs to be looped over.
The other question is how to create your exit condition. The repeat condition is to loop while
guess does not match
answer. This way, as soon as the user guesses the correct number, the exit condition occurs automatically.
Note that you will need to add two lines to your existing code to wrap your game logic in a while loop: the
while statement itself, and the closing curly brace to close off the
while loop.
Modify your code to include the two lines indicated by the comments below:
Run your app, and play through the game a few times. How good of a guesser are you?
Adding the Final Touches
You now have a functional game! There’s only one thing to add: the turn counter. This will give your player some feedback on their gameplay.
The
turn variable has already been created to store this information, so it’s just a matter of incrementing the value of
turn each time the player makes a guess.
Add the following line of code directly underneath the while (guess != answer) { statement:

turn++;
turn++; increments the count by one. Why don’t you just use
turn = turn + 1;, you ask? Functionally, it’s the same thing. However, incrementing a variable is such a common programming task that it pays to have a shorthand method to save on typing.
Fun Fact: The “C” programming language was derived from a previous language called “B”. When the next iteration of the C language was written, the developers put their tongue firmly in cheek and named the new language “C++” — meaning “one better than C”. :]
All that’s left to do is display the current value of
turn in two places: on the user prompt, and at the end of the game.
Find the following line of code:

NSLog(@"Enter a number between 1 and 100");

…and modify it to look like the line below:

NSLog(@"Guess #%i: Enter a number between 1 and 100", turn);
The code above uses the format specifier %i to display the current value of turn in the user prompt. (The # in the string is an ordinary literal character; only the % sequences act as format specifiers.)
Add the following line of code immediately after the closing curly brace of the while loop:
This will display the final number of guesses once the player has guessed the correct number.
If you feel adventurous, instead of adding the above line to log the number of turns after the while loop, you could also modify the congratulatory message to output the number of turns right there. But I’ll leave that as an exercise for you :]
Take a minute and review the contents of
main in your app to make sure that it matches the code below:
Run your app and check out the latest changes!
Where To Go From Here?
By creating this small app, you’ve learned some of the most fundamental concepts in Objective-C, namely:
- functions
- if..else blocks
- format specifiers
- while loops
The final project with full source code can be found here.
You’re now ready to move on to the next tutorial in this series, where you’ll learn about some more fundamental concepts in Objective-C, including working with objects and classes.
If you have any question or comments, come join the discussion on this series in the forums!
Learning ActionScript's Basic Game Framework by Creating A Matching Game
- Placing Interactive Elements
- Game Play
- Encapsulating the Game
- Adding Scoring and a Clock
- Adding Game Effects
- Modifying the Game
- Placing Interactive Elements
- Game Play
- Encapsulating the Game
- Adding Scoring and a Clock
- Adding Game Effects
- Modifying the Game
To build our first game, I've chosen one of the most popular games you will find on the Web and in interactive and educational software: a matching game. In a matching game, a grid of face-down cards hides matching pairs of pictures; the player turns over two cards at a time, trying to find the pairs.
A good player is one who remembers what cards he or she sees when a match is not made, and can determine where pairs are located after several failed tries.
Computer versions of matching games have advantages over physical versions: You don't need to collect, shuffle, and place the cards to start each game. The computer does that for you. It is also easier and less expensive for the game developer to create different pictures for the cards with virtual cards rather than physical ones.
To create a matching game, we first work on placing the cards on the screen. To do this, we need to shuffle the deck to place the cards in a random order each time the game is played.
Then, we take the player's input and use that to reveal the pictures on a pair of cards. Then, we compare the cards and remove them if they match.
We also need to turn cards back to their face-down positions when a match is not found. And then we need to check to see when all the pairs have been found so that the game can end.
Placing Interactive Elements
Creating a matching game first requires that you create a set of cards. Because the cards need to be in pairs, we need to figure out how many cards will be displayed on the screen, and make half that many pictures.
For instance, if we want to show 36 cards in the game, there will be 18 pictures, each appearing on 2 cards.
Methods for Creating Game Pieces
There are two schools of thought when it comes to making game pieces, like the cards in the matching game.
Multiple-Symbol Method
The first method is to create each card as its own movie clip. So, in this case, there will be 18 symbols. Each symbol represents a card.
One problem with this method is that you will likely be duplicating graphics inside of each symbol. For instance, each card would have the same border and background. So, you would have 18 copies of the border and background.
Of course, you can get around this by creating a background symbol that is then used in each of the 18 card symbols.
But the multiple-symbol method still has problems when it comes to making changes. For instance, suppose you want to resize the pictures slightly. You'd need to do that 18 times for 18 different symbols.
Also, if you are a programmer teaming up with an artist, it is inconvenient to have the artist update 18 or more symbols. If the artist is a contractor, it could run up the budget as well.
Single-Symbol Method
The second method for working with a set of playing pieces, such as cards, is a single-symbol method. You would have one symbol, a movie clip, with multiple frames. Each frame contains the graphics for a different card. Shared graphics, such as a border or background, can be on a layer in the movie clip that stretches across all the frames.
This method has major advantages when it comes to updates and changes to the playing pieces. You can quickly and easily move between and edit all the frames in the movie clip. You can also easily grab an updated movie clip from an artist with whom you are working.
Setting Up the Flash Movie
Using the single-symbol method, we need to have at least one movie clip in the library. This movie clip will contain all the cards, and even a frame that represents the back of the card that we must show when the card is face down.
Create a new movie that contains a single movie clip called Cards. To create a new movie in Flash CS3, choose File, New, and then you will be presented with a list of file types. You must choose Flash File (ActionScript 3.0) to create a movie file that will work with the ActionScript 3.0 class file we are about to create.
Put at least 19 frames in that movie clip, representing the card back and 18 card fronts with different pictures on them. You can open the MatchingGame1.fla file for this exercise if you don't have your own symbol file to use.
Figure 3.1 shows a timeline for the Card movie clip we will be using in this game. The first frame is "back" of the card. It is what the player will see when the card is supposed to be face down. Then, each of the other frames shows a different picture for the front of a card.
Figure 3.1 The Card movie clip is a symbol with 37 frames. Each frame represents a different card.
After we have a symbol in the library, we need to set it up so that we can use it with our ActionScript code. To do this, we need to set its properties by selecting it in the library and bringing up the Symbol Properties dialog box (see Figure 3.2).
Figure 3.2 The Symbol Properties dialog box shows the properties for the symbol Card.
Set the symbol name to Card and its type to Movie Clip. For ActionScript to be able to work with the Cards movie clip, it needs to be assigned a class. By checking the Export for ActionScript box, we automatically get the class name Card assigned to the symbol. This will be fine for our needs here.
There is nothing else needed in the Flash movie at all. The main timeline is completely empty. The library has only one movie clip in it, the Cards movie clip. All that we need now is some ActionScript.
Creating the Basic ActionScript Class
To create an ActionScript class file, choose File, New, and then select ActionScript File from the list of file types; by doing so you create an untitled ActionScript document that you can type into.
We start off an ActionScript 3.0 file by defining it as a package. This is done in the first line, as you can see in the following code sample:
package {
    import flash.display.*;
Right after the package declaration, we need to tell the Flash playback engine what classes we need to accomplish our tasks. In this case, we go ahead and tell it we'll be needing access to the entire flash.display class and all its immediate subclasses. This will give us the ability to create and manipulate movie clips like the cards.
The class declaration is next. The name of the class must match the name of the file exactly. In this case, we call it MatchingGame1. We also need to define what this class will affect. In this case, it will affect the main Flash movie, which is a movie clip:
public class MatchingGame1 extends MovieClip {
Next is the declaration of any variables that will be used throughout the class. However, our first task of creating the 36 cards on the screen is so simple that we don't need to use any variables. At least not yet.
Therefore, we can move right on to the initialization function, also called the constructor function. This function runs as soon as the class is created when the movie is played. It must have exactly the same name as the class and the ActionScript file:
public function MatchingGame1():void {
This function does not need to return any value, so we can put :void after it to tell Flash that nothing will ever be returned from this function. We can also leave the :void off, and it will be assumed by the Flash compiler.
Inside the constructor function we can perform the task of creating the 36 cards on the screen. We'll make it a grid of 6 cards across by 6 cards down.
To do this, we use two nested for loops. The first moves the variable x from 0 to 5. The x will represent the column in our 6x6 grid. Then, the second loop will move y from 0 to 5, which will represent the row:
for(var x:uint=0;x<6;x++) {
    for(var y:uint=0;y<6;y++) {
Each of these two variables is declared as a uint, an unsigned integer, right inside the for statement. Each will start with the value 0, and then continue while the value is less than 6. And, they will increase by one each time through the loop.
So, this is basically a quick way to loop and get the chance to create 36 different Card movie clips. Creating the movie clips is just a matter of using new, plus addChild. We also want to make sure that as each new movie clip is created it is stopped on its first frame and is positioned on the screen correctly:
var thisCard:Card = new Card();
thisCard.stop();
thisCard.x = x*52+120;
thisCard.y = y*52+45;
addChild(thisCard);
} } } } }
The positioning is based on the width and height of the cards we created. In the example movie MatchingGame1.fla, the cards are 50 by 50 with 2 pixels in between. So, by multiplying the x and y values by 52, we space the cards with a little extra space between each one. We also add 120 horizontally and 45 vertically, which happens to place the card about in the center of a 550x400 standard Flash movie.
Before we can test this code, we need to link the Flash movie to the ActionScript file. The ActionScript file should be saved as MatchingGame1.as, and located in the same directory as the MatchingGame1.fla movie.
However, that is not all you need to do to link the two. You also need to set the Flash movie's Document class property in the Property Inspector. Just select the Properties tab of the Property Inspector while the Flash movie MatchingGame1.fla is the current document. Figure 3.3 shows the Property Inspector, and you can see the Document class field at the bottom right.
Figure 3.3 You need to set the Document class of a Flash movie to the name of the AS file that contains your main script.
Figure 3.4 shows the screen after we have tested the movie. The easiest way to test is to go to the menu and choose Control, Test Movie.
Figure 3.4 The screen shows 36 cards, spaced and in the center of the stage.
Using Constants for Better Coding
Before we go any further with developing this game, let's look at how we can make what we have better. We'll copy the existing movie to MatchingGame2.fla and the code to MatchingGame2.as. Remember to change the document class of MatchingGame2.fla to MatchingGame2 and the class declaration and constructor function to MatchingGame2.
Suppose you don't want a 6x6 grid of cards. Maybe you want a simpler 4x4 grid. Or even a rectangular 6x5 grid. To do that, you just need to find the for loops in the previous code and change the loops so that they loop with different amounts.
A better way to do it is to remove the specific numbers from the code all together. Instead, have them at the top of your code, and clearly labeled, so that you can easily find and change them later on.
We've got several other hard-coded values in our program: the 6x6 board dimensions, the 52-pixel card spacing in each direction, and the 120- and 45-pixel board offsets.
Instead of placing these values in the code, let's put them in some constant variables up in our class, to make them easy to find and modify:
public class MatchingGame2 extends MovieClip {
    // game constants
    private static const boardWidth:uint = 6;
    private static const boardHeight:uint = 6;
    private static const cardHorizontalSpacing:Number = 52;
    private static const cardVerticalSpacing:Number = 52;
    private static const boardOffsetX:Number = 120;
    private static const boardOffsetY:Number = 45;
Now that we have constants, we can replace the code in the constructor function to use them rather than the hard-coded numbers:
public function MatchingGame2():void {
    for(var x:uint=0;x<boardWidth;x++) {
        for(var y:uint=0;y<boardHeight;y++) {
            var thisCard:Card = new Card();
            thisCard.stop();
            thisCard.x = x*cardHorizontalSpacing+boardOffsetX;
            thisCard.y = y*cardVerticalSpacing+boardOffsetY;
            addChild(thisCard);
        }
    }
}
You can see that I also changed the name of the class and function to MatchingGame2. You can find these in the sample files MatchingGame2.fla and MatchingGame2.as.
In fact, open those two files. Test them one time. Then, test them again after you change some of the constants. Make the boardHeight only five cards, for instance. Scoot the cards down by 20 pixels by changing boardOffsetY. The fact that you can make these changes quickly and painlessly drives home the point of using constants.
Shuffling and Assigning Cards
Now that we can add cards to the screen, we want to assign the pictures randomly to each card. So, if there are 36 cards in the screen, there should be 18 pairs of pictures in random positions.
Chapter 2, "ActionScript Game Elements," discussed how to use random numbers. However, we can't just pick a random picture for each card. We need to make sure there are exactly two of each type of card on the screen. No more, no less; otherwise, there will not be matching pairs.
To do this, we need to create an array that lists each card, and then pick a random card from this array. The array will be 36 items in length, containing 2 of each of the 18 cards. Then, as we create the 6x6 board, we'll be removing cards from the array and placing them on the board. When we have finished, the array will be empty, and all 18 pairs of cards will be accounted for on the game board.
Here is the code to do this. A variable i is declared in the for statement. It will go from zero to the number of cards needed. This is simply the board width times the board height, divided by two (because there are two of each card). So, for a 6x6 board, there will be 36 cards. We must loop 18 times to add 18 pairs of cards:
// make a list of card numbers
var cardlist:Array = new Array();
for(var i:uint=0;i<boardWidth*boardHeight/2;i++) {
    cardlist.push(i);
    cardlist.push(i);
}
The push command is used to place a number in the array, twice. Here is what the array will look like:
0,0,1,1,2,2,3,3,4,4,5,5,6,6,7,7,8,8,9,9,10,10,11,11,12,12,13,13,14,14,15,15,16,16,17,17
Now as we loop to create the 36 movie clips, we'll pull a random number from this list to determine which picture will display on each card:
for(var x:uint=0;x<boardWidth;x++) { // horizontal
    for(var y:uint=0;y<boardHeight;y++) { // vertical
        var c:Card = new Card(); // copy the movie clip
        c.stop(); // stop on first frame
        c.x = x*cardHorizontalSpacing+boardOffsetX; // set position
        c.y = y*cardVerticalSpacing+boardOffsetY;
        var r:uint = Math.floor(Math.random()*cardlist.length); // get a random face
        c.cardface = cardlist[r]; // assign face to card
        cardlist.splice(r,1); // remove face from list
        c.gotoAndStop(c.cardface+2);
        addChild(c); // show the card
    }
}
The new lines are in the middle of the code. First, we use this line to get a random number between zero and the number of items remaining in the list:
var r:uint = Math.floor(Math.random()*cardlist.length);
The Math.random() function will return a number from 0.0 up to just before 1.0. Multiply this by cardlist.length to get a random number from 0.0 up to 35.9999. Then use Math.floor() to round that number down so that it is a whole number from 0 to 35—that is, of course, when there are 36 items in the cardlist array at the start of the loops.
Then, the number at index r in cardlist is assigned to a property of c named cardface. Next, we use the splice command to remove that number from the array so that it won't be used again.
In addition, the MatchingGame3.as script includes this line to test that everything is working so far:
c.gotoAndStop(c.cardface+2);
This syntax makes the Card movie clip show its picture. So, all 36 cards will be face up rather than face down. It takes the value of the property cardface, which is a number from 0 to 17, and then adds 2 to get a number from 2 to 19. This corresponds to the frames in the Card movie clip, where frame 1 is the back of the card, and frames 2 and so on are the picture faces of the cards.
Obviously, we don't want to have this line of code in our final game, but it is useful at this point to illustrate what we have accomplished. Figure 3.5 shows what the screen might look like after we run the program with this testing line in place.
Figure 3.5 The third version of our program includes code that reveals each of the cards. This is useful to get visual confirmation that your code is working so far. | http://www.informit.com/articles/article.aspx?p=1013848 | CC-MAIN-2017-04 | refinedweb | 2,987 | 81.12 |
Created on 2010-11-21 05:37 by v+python, last changed 2020-11-16 21:55 by iritkatriel.
The CGI interface is a binary stream, because it is pumped directly to/from the HTTP protocol, which is a binary stream.
Hence, cgitb.py should produce binary output. Presently, it produces text output.
When one sets stdout to a binary stream, and then cgitb intercepts an error, cgitb fails.
Demonstration of problem:
import sys
import traceback
sys.stdout = open("sob", "wb") # WSGI sez data should be binary, so stdout should be binary???
import cgitb
sys.stdout.write(b"out")
fhb = open("fhb", "wb")
cgitb.enable()
fhb.write("abcdef") # try writing non-binary to binary file. Expect an error, of course.
So since cgi.py was fixed to use the .buffer attribute of sys.stdout, that leaves sys.stdout itself as a character stream, and cgitb.py can successfully write to that.
If cgitb.py never writes anything but ASCII, then maybe that should be documented, and this issue closed.
If cgitb.py writes non-ASCII, then it should use an appropriate encoding for the web application, which isn't necessarily the default encoding on the system. Some user control over the appropriate encoding should be given, or it should be documented that the encoding of sys.stdout should be changed to an appropriate encoding, because that is where cgitb.py will write its character stream. Guidance on how to do that would be appropriate for the documentation also, as a CGI application may be the first one a programmer might write that can't just use the default encoding configured for the system. | https://bugs.python.org/issue10479 | CC-MAIN-2021-17 | refinedweb | 273 | 69.28 |
Given an IP Address, the task is to validate this IP address and check whether it is IPv6 or not with the help of ReGex(Regular Expression). If the IP Address is valid then print “IPv6 Address” otherwise print “Not”.
A valid IPv6 address is an IP in the form "XXXX:XXXX:XXXX:XXXX:XXXX:XXXX:XXXX:XXXX", where each X is a hexadecimal digit. For example,
Input-1 −
IP= “3001:0da8:82a3:0:0:8B2E:0270”
Output −
“Not”
Explanation − This address has only seven hexadecimal fields instead of the required eight, so it is not a valid IPv6 address; return “Not”.
Input-2 −
IP= “2001:0db8:85a3:0000:0000:8a2e:0370:7334”
Output −
“IPv6”
Explanation − This is a valid IPv6 Address, return “IPv6”.
To check whether the given IP address is IPv6 or not, we use ReGex. A ReGex is an expression that contains a sequence of characters that define a specific pattern. These patterns can be used in algorithms to match the pattern in a string. It is also widely used for Input Validation.
Range Specification − We can specify the characters to make the patterns in the simplest way. To specify the range by using characters, we can use ‘[ ]’ brackets.
Specifying Characters − The above expression indicates an opening bracket and a digit in the range a to z , ‘A’ to ‘Z’ and ‘0’ to ‘9’ as a regex.
[a-z], [A-Z] and [0-9].
Repeated Patterns − An expression modifier can be “+” that suggests matching the occurrence of a pattern one or more times or it can be “*” that suggests matching the occurrence of a pattern zero or more times.
The expression [a-z]* will match a blank string.
If you want to specify a group of characters to match one or more times, then you can use the parentheses as follows −
(abc)+
Take as input a string specifying an IP address.
A function validIPAddress(string IP) takes the IP address as input and checks whether it is valid. If it is valid, it returns “IPv6”; otherwise it returns “Not”.
Create a regex pattern for the IPv6 address. An IPv6 address contains 8 fields separated by colons, where each field holds 1 to 4 hexadecimal digits: XXXX:XXXX:XXXX:XXXX:XXXX:XXXX:XXXX:XXXX.
The pattern ((([0-9a-fA-F]){1,4})\:){7}([0-9a-fA-F]){1,4} matches seven repetitions of “1 to 4 hex digits followed by a colon”, then one final field of 1 to 4 hex digits.
The character class [0-9a-fA-F] describes a single hexadecimal digit: the digits 0-9 together with the letters a-f in either lowercase or uppercase.
#include<bits/stdc++.h>
using namespace std;
string validIPAddress(string IP) {
   regex ipv6("((([0-9a-fA-F]){1,4})\\:){7}([0-9a-fA-F]){1,4}");
   if(regex_match(IP, ipv6))
      return "IPv6";
   else
      return "Not";
}
int main(){
   string IP = "3001:0da8:82a3:0:0:8B2E:0270";
   string ans = validIPAddress(IP);
   cout << ans << endl;
   return 0;
}
Running the above code will generate the output as,
Not
Since the input IP Address is not a valid IP address, we will return “Not”. | https://www.tutorialspoint.com/validate-ipv6-address-using-regex-patterns-in-cplusplus | CC-MAIN-2021-25 | refinedweb | 526 | 69.72 |
After you add the webhook URL of a DingTalk chatbot, Enterprise Distributed Application Service (EDAS) can send alert notifications to the related DingTalk group. This increases your O&M efficiency and allows you to know alert events at the earliest opportunity.
Add a custom DingTalk chatbot and obtain the webhook URL
- Run the DingTalk client on a PC, go to the DingTalk group to which you want to add an alert chatbot, and then click the Group Settings icon in the upper-right corner.
- In the Group Settings panel, click Group Assistant.
- In the Group Assistant panel, click Add Robot.
- In the ChatBot dialog box, click the + icon in the Add Robot card. Then, click Custom.
- In the Robot details dialog box, click Add.
- In the Add Robot dialog box, edit the profile picture, enter a chatbot name, and then select at least one of the options in the Security Settings section. Read the DingTalk Custom Robot Service Terms of Service and select I have read and accepted DingTalk Custom Robot Service Terms of Service. Click Finished.
- In the Add Robot dialog box, click Copy to save the webhook URL of the chatbot and click Finished.
Create contacts
- Log on to the EDAS console.
- In the left-side navigation pane, click Applications. In the top navigation bar, select the region where the application whose alert rules you want to manage is deployed. In the upper part of the Applications page, select the namespace where the application is deployed.
- On the Applications page, select Container Service or Serverless Kubernetes Cluster from the Cluster Type drop-down list, and click the name of the application that you want to manage.
- In the left-side navigation pane, choose .
- On the Contact tab, click New contact in the upper-right corner.
- In the New contact dialog box, set the Name parameter, enter the obtained webhook URL of the DingTalk chatbot in the DingTalk robot field, select Whether to receive system notifications as needed, and then click OK.
Create an alert group
- In the left-side navigation pane, choose .
- On the Contact Group tab, click Create a contact group in the upper-right corner.
- In the Create a contact group dialog box, set the Group name parameter, set the Alarm contact parameter to the created DingTalk chatbot, and then click OK.
Create an alert
- In the left-side navigation pane, choose .
- On the Alarm Policies page, click Create Alarm in the upper-right corner.
- In the Create Alarm dialog box, set related parameters and click Save.
- Set the Alarm Name parameter. For example, you can enter alert on JVM-GC times in period-over-period comparison.
- Select an application from the Application Site drop-down list and select an application group from the Application Group drop-down list.
- Select a metric type from the Type drop-down list. For example, you can select JVM_Monitoring.
- Set the Dimension parameter to Traverse.
- Set the Alarm Rules parameter.
- Select Meet All of the Following Criteria.
- Configure an alert rule. For example, an alert is triggered when the average value of JVM_FullGC within the last 5 minutes (N = 5) increases by 100% compared with that in the previous hour.Note Click the + icon next to the Last N Minutes parameter to create multiple alert rules.
- Select Ding Ding Robot for Notification Mode.
- In the Notification Receiver section, select the contact group that you create in Create an alert group. In the Contact Groups list, click the name of a contact group. If the contact group appears in the Selected Groups list, the setting is successful. | https://www.alibabacloud.com/help/en/enterprise-distributed-application-service/latest/set-alarm-rules-for-dingtalk-robot | CC-MAIN-2022-33 | refinedweb | 596 | 65.22 |
Red Hat Bugzilla – Bug 75451
RFE: Make rpm fail immediately at beginning of build with invalid Group
Last modified: 2007-11-30 17:10:30 EST
Bugzilla in IRC format. ;o)
<notting> packages in invalid groups, yay
<jgarzik> the recalcitrant laptop is installing Linux, whee
* jgarzik kicks IBM laptops
<jbj> notting: Fascist policy available in rpmbuild for the asking in bugzilla
<mharris> jbj: Sounds like a good idea IMHO
<owen> jgarzik: Not my laptop you don't!
--- twaugh is now known as twaugh_away
<jbj> mharris: someone has to ask
<-- havill has quit (Remote closed the connection)
* jgarzik switches owen's laptop to Norweigan locale when he's not looking
<mharris> jbj: I'm asking. ;o)
<_Anarchy_> jg: so how is indoctrination going ?
<jrb> another jg?
<jrb> we need to namespace initials.
--> havill (~chatzilla@gaijin.devel.redhat.com) has joined #devel
--- qabot gives channel operator status to havill
<jbj> mharris: in bugzilla please, I scour my neurons weekly
Closing bugs on older, no longer supported, releases. Apologies for any lack of
response.
For RPM issues, please try a current release such as Fedora Core 4; if bugs
persist, please open a new issue.
No problem. This issue is still present in all supported OS releases.
Reopening and reassigning to FC4.
This bug does not define what it means with invalid. I'm assuming this is not in:
/usr/share/doc/rpm-4.4.2/GROUPS
Group policy is distribution based and seperate to the mechanism within rpm
itself. External tools such as rpmlint, etc can perform distribution specific
policy checking. Having group validation in rpmbuild is currently not a desired
goal. | https://bugzilla.redhat.com/show_bug.cgi?id=75451 | CC-MAIN-2016-50 | refinedweb | 270 | 55.64 |
How to run external process in Scala and get both exit code and output?
scala execute string as code
show command in scala
sbt run bash command
scala process output
exec scala not found
spark run bash script
scala get current process id
How to call an external process and read both of its exit code and standard out, once it finished?
Using
sys.Process will result in an exception being thrown on different exit code than 0 for success.
Try this:
import sys.process._ val stdout = new StringBuilder val stderr = new StringBuilder val logger = ProcessLogger(stdout append _, stderr append _) val status = "ls -al " ! logger println(status) println("stdout: " + stdout) println("stderr: " + stderr)
Then you got both of them: status, stdout and stderr.
How to execute external commands and use their STDOUT in Scala , How to call an external process and read both of its exit code and standard out, once it Have you looked at Process. exitValue() val output = scala.io.Source. Executing system commands and getting their status code (exit code) It's very easy to run external system commands in Scala. You just need one import statement, and then you run your command as shown below with the "!" operator: scala> import sys.process._ import sys.process._ scala> "ls -al" ! total 64 drwxr-xr-x 10 Al staff 340 May 18 18:00 . drwxr-xr-x 3 Al staff 102 Apr 4 17:58 ..
Have you looked at Process.exitValue?
Returns the exit value for the subprocess.
Scala Standard Library 2.13.2, Use the ! method to get the exit code from a process, or !! to get the standard output from a process. Be aware that attempting to This is Recipe 12.12, “How to execute external commands and use their STDOUT in Scala.” Problem. You want to run an external command and then use the standard output (STDOUT) from that process in your Scala program. Solution. Use the !! method to execute the command and get the standard output from the resulting process as a String.
(I've asked this question on freenode #java and was requested to post here if I found a solution, so here goes)
Simple approach is to use sys.ProcessBuilder:
def RunExternal(executableName: String, executableDir: String) : (Int, List[String]) = { val startExecutionTime = System.currentTimeMillis() val pb : ProcessBuilder = new ProcessBuilder (executableName) pb.directory(new java.io.File(executableDir)) val proc = pb.start() proc.waitFor() val exitCode = proc.exitValue() val output = scala.io.Source.fromInputStream(proc.getInputStream).getLines.toList val executionTime = System.currentTimeMillis() - startExecutionTime logger.info(String.format(s"Process exited with exit code: ${exitCode}.")) logger.info(String.format(s"Process took ${executionTime} milliseconds.")) (exitCode, output) }
scala.sys.process.ProcessBuilder, How to call an external process and read both of its exit code and standard out, get the exit code from a process, or !! to get the standard output from a process. To execute external commands, use the methods of the scala.sys.process package. There are three primary ways to execute external commands: Use the ! method to execute the command and get its exit status. Use the !! method to execute the command and get its output. Use the lines method to execute the command in the background and get its result as a Stream.
12.12. Handling STDOUT and STDERR for External Commands , Running an external command can be as simple as "ls".! , or as complex as building Return status of the process ( ! methods); Output of the process as a String _ // This uses ! to get the exit code def fileExists(name: String) = Seq("test", "-f", name). URL can both be used directly as input to other processes, and java.io.:
12.10. Executing External Commands, One can control where a the output of an external process will go to, and where its input Execute "ls" and assign a `Stream[String]` of its output to "contents". two ProcessBuilder to create a third, the ones that redirect input or output of a blocks until all external commands exit, and returns the exit code of the last one in scala.sys.process package
ProcessBuilder, You want to run an external command and get access to both its STDOUT and STDERR . This Scala shell script demonstrates the approach: Mac OS X (Unix) system, I correctly get the following exit status, STDOUT , and STDERR output: Here's a really simple Scala wrapper that allows you to retrieve stdout, stderr and exit code. import scala.sys.process._ case class ProcessInfo(stdout: String, stderr: String, exitCode: Int) object CommandRunner { def runCommandAndGetOutput(command: String): ProcessInfo = { val stdout = new StringBuilder val stderr = new StringBuilder val
- Thank you; simpler and more Scalaesque than the accepted solution, and worked well for me.
- yes, however, bare Process throws on supposed exitValue being other than 0, I'm not sure whether I can get both output and exit code with it? (I've changed my approach to ProcessBuilder, which solved the issue using the function you suggested) | https://thetopsites.net/article/53775505.shtml | CC-MAIN-2021-31 | refinedweb | 826 | 66.13 |
17 February 2010 22:24 [Source: ICIS news]
HOUSTON (ICIS news)--Mid-cut detergent-range US fatty alcohol spot prices are rising into the high 70s cents/lb on strong demand, driving up second-quarter contract expectations, buyers said on Wednesday.
Buyers confirmed spot prices ranging 71-77 cents/lb ($1,565-1.698/tonne, €1,142-1,240/tonne) were done for February business.
January spot transactions were done broadly within a 62.50-69.50 cents/lb spread, buyers and sellers said. That matched the 62.50-69.50 cents/lb first-quarter contract range, according to global chemical market intelligence service ICIS pricing.
“Second-quarter contracts will probably go up on the higher spot prices,” one buyer said.
The ?xml:namespace>
In the fourth quarter of 2009, US buyers sought more spot alcohol placements in order to keep shorter inventories and to cope with the swift rise in costs of feedstock natural oils such as palm kernel oil (PKO) in
A seller said about 50% of the detergent-range alcohol market was now taking place in the spot arena, a change from the traditional
US-based sellers have also moved away from formal increase announcements, choosing instead to take an account-by-account approach on alcohol prices.
US domestic fatty alcohol producers include Procter & Gamble, Shell and Cognis. Importers include Kao, VVF, Godrej and Musim Mas, among others.
Buyers and end users of mid-cut detergent range alcohols include most major detergent and surfactant producers.
( | http://www.icis.com/Articles/2010/02/17/9335691/feb-us-mid-cut-fatty-alcohol-spot-prices-rise-on-strong-demand.html | CC-MAIN-2014-42 | refinedweb | 248 | 53.21 |
A prime number is a positive integer that is divisible only by
1 and itself. For example: 2, 3, 5, 7, 11, 13, 17
Program to Check Prime Number
#include <stdio.h> int main() { int n, i, flag = 0; printf("Enter a positive integer: "); scanf("%d", &n); for (i = 2; i <= n / 2; ++i) { // condition for non-prime if (n % i == 0) { flag = 1; break; } } if (n == 1) { printf("1 is neither prime nor composite."); } else { if (flag == 0) printf("%d is a prime number.", n); else printf("%d is not a prime number.", n); } return 0; }
Output
Enter a positive integer: 29 29 is a prime number.
In the program, a for loop is iterated from
i = 2 to
i < n/2.
In each iteration, whether n is perfectly divisible by i is checked using:
if (n % i == 0) { }
If n is perfectly divisible by i, n is not a prime number. In this case, flag is set to 1, and the loop is terminated using the
break statement.
After the loop, if n is a prime number, flag will still be 0. However, if n is a non-prime number, flag will be 1.
Visit this page to learn how you can print all the prime numbers between two intervals. | https://cdn.programiz.com/c-programming/examples/prime-number | CC-MAIN-2020-40 | refinedweb | 212 | 71.24 |
I want to add the wordcloud to streamlit app to show thw related words in the tweets.
1 Like
Great question and you absolutely can do this with st.pyplot() and wordcloud. Here’s a simple example:
import streamlit as st from wordcloud import WordCloud import matplotlib.pyplot as plt # Create some sample text text = 'Fun, fun, awesome, awesome, tubular, astounding, superb, great, amazing, amazing, amazing, amazing' # Create and generate a word cloud image: wordcloud = WordCloud().generate(text) # Display the generated image: plt.imshow(wordcloud, interpolation='bilinear') plt.axis("off") plt.show() st.pyplot()
Let me know if that works for you!
3 Likes | https://discuss.streamlit.io/t/how-to-add-wordcloud-graph-in-streamlit/818 | CC-MAIN-2020-34 | refinedweb | 104 | 59.8 |
Lodsys Sues 7 iPhone Devs Over Patent Infringement Claims
timothy posted more than 3 years ago | from the all-very-horatio-alger-villain dept.
Location, location, location (4, Insightful)
Translation Error (1176675) | more than 3 years ago | (#36322974)
Re:Location, location, location (0)
Anonymous Coward | more than 3 years ago | (#36323034)
can we nuke the thing?
aren't there any rules where you should file?
Re:Location, location, location (1)
Anonymous Coward | more than 3 years ago | (#36323184)
We need to get a bunch of nerds to move to East Texas and try to get on as many patent infringement juries as possible.
Re:Location, location, location (1)
ColdWetDog (752185) | more than 3 years ago | (#36323890)
We need to get a bunch of nerds to move to East Texas and try to get on as many patent infringement juries as possible.
Why the hate? What have nerds done to you that makes you want to punish them in that way?
Re:Location, location, location (1)
s73v3r (963317) | more than 3 years ago | (#36325816)
We're not hating, we're just asking them to make this sacrifice for the greater good. They will be remembered as heroes.
Re:Location, location, location (1)
DJRumpy (1345787) | more than 3 years ago | (#36326192)
...The Greater Good...
Re:Location, location, location (1)
oddaddresstrap (702574) | more than 3 years ago | (#36326252)
We're not hating, we're just asking them to make this sacrifice for the greater good. They will be remembered as martyrs.
FTFY
Re:Location, location, location (1)
Runaway1956 (1322357) | more than 3 years ago | (#36326012)
East Texas is a nice place. I'm trying to think of places I've liked better - ohhhh - the Cascade Mountains, Nova Scotia, Alaska, Montana, Scotland, West Texas, maybe a couple of others. East Texas isn't my first choice for a retirement home, but it comes in a LONG way ahead of Florida, Massachusetts, California, Arizona, Virginia, or several other states.
While you're busy hating on Texans, maybe you should look in your own backyard, maybe do a little investigating. There are plenty of douches living in YOUR home town, your county, your state. Or - do you happen to agree with all the laws to which you are subject?
In short, don't be a douche.
Re:Location, location, location (1)
Zomalaja (1324199) | more than 3 years ago | (#36323194)
"Lodsys is a Texas limited liability company with its principal place of business in Marshall, Texas". which is, of all things, in Eastern Texas. What a coincidence!
Re:Location, location, location (1)
s73v3r (963317) | more than 3 years ago | (#36325824)
Most patent troll companies are. I wonder why that is...
Re:Location, location, location (1)
Anonymous Coward | more than 3 years ago | (#36323238)
Countersue... in Nome, Alaska!
Re:Location, location, location (2)
fermion (181285) | more than 3 years ago | (#36324252)
Re:Location, location, location (0)
Anonymous Coward | more than 3 years ago | (#36324550)
That is a state tort reform law, this is a federal lawsuit.. don't think that'll work for patent cases.
Re:Location, location, location (1)
jscotta44 (881299) | more than 3 years ago | (#36325346)
It is a Federal court and not subject to the State of Texas rules on loser pays.
Innovation! (-1)
Anonymous Coward | more than 3 years ago | (#36323000)
Ah yes, Apple, the great innovator.
Re:Innovation! (0)
Kenja (541830) | more than 3 years ago | (#36323060)
Re:Innovation! (0)
revscat (35618) | more than 3 years ago | (#36323156)
Now the only thing left is a homophobic reference about Steve Jobs.
Re:Innovation! (0)
Anonymous Coward | more than 3 years ago | (#36323248)
And we could use a post in which we're informed that Apple users pay a fortune for pretty boxes. Bonus points if they compare their homebuilt gravy-cooled geekbox running a fork of Minix to the type of thing regular and sexually active consumers would want.
Re:Innovation! (0)
Lifyre (960576) | more than 3 years ago | (#36323340)
Have you seen what most people (at least in the USA) eat these days? I hardly think many of them are regular...
Re:Innovation! (-1, Troll)
Sniper98G (1078397) | more than 3 years ago | (#36323392)
Steve jobs is probably to busy sticking an iphone up his gay lover's butt to care about this.
Nailed it!
Re:Innovation! (-1, Offtopic)
StikyPad (445176) | more than 3 years ago | (#36324240)
Idiot mods, look at the context. At best it's offtopic.
Re:Innovation! (0)
Runaway1956 (1322357) | more than 3 years ago | (#36326022)
Homophobic reference to Steve? I would - but I ain't scared of that little queer!
I would hope apple will defend. (0)
jellomizer (103300) | more than 3 years ago | (#36323012)
A loss to developers due to legal action could cause a chilling effect on making iOS applications. Developers have to go through the hassle of approval to make sure their product is good, then pay Apple 20% of their profits. Not having legal protection from Apple on top of that would push developers toward Android, where at least they can get their software out more easily.
Re:I would hope apple will defend. (2)
DJCouchyCouch (622482) | more than 3 years ago | (#36323044)
Re:I would hope apple will defend. (3, Interesting)
MightyMartian (840721) | more than 3 years ago | (#36323874)?
Re:I would hope apple will defend. (0)
Anonymous Coward | more than 3 years ago | (#36324802)
> judiciary and the politicians let it get this bad?
Because they aren't affected. Once someone patents "system for searching legal precedents on a mobile device" or "method of communicating with voters via a network" and sues their ass they will wake up.
Re:I would hope apple will defend. (0)
Anonymous Coward | more than 3 years ago | (#36324808)
Uh, the judiciary and the politicians work directly for the very people who want it to be this bad, and even worse.
The rest of us are just profit-fodder that should be forced to accept whatever terms the aristocracy is pleased to present.
While it is very economically healthy to have lots of independent developers producing disruptive new technologies, that same phenomena is very harmful to the producers of the already-established technologies. So the wealthiest among us have every incentive to do everything within their power to prevent everyone else from ever developing something new.
None of this should be surprising to anyone who is paying attention.
Re:I would hope apple will defend. (3, Informative)
Penguinisto (415985) | more than 3 years ago | (#36323074)
Err, before pronouncing doom and gloom upon all things Apple, you may want to think ahead a bit... this patent appears sufficiently broad enough so that Android and WP7 developers may well be next.
Personally, while Apple does need to get it in gear and provide as much aid as possible (can 'tortious interference' be a case here if Apple were indeed to sue Lodsys?), Microsoft and Google may *very* well want to get off their butts and at least start making moves to protect their own dev stables.
PS: Even worst-case, this would be a chilling effect only if your iPhone app included an in-app payment system.
Re:I would hope apple will defend. (1)
TheNucleon (865817) | more than 3 years ago | (#36323912)
"PS: Even worst-case, this would be a chilling effect only if your iPhone app included an in-app payment system."
While I agree with most of your post, I don't agree with this last point. I've been warming up to be an indie dev on mobile devices, and this chills my enthusiasm in a very general way. I don't know when some butt-munch is going to pull a bogus patent out of their pocket and sue me over something that should never even have been granted a patent, let alone cost me legal fees to defend against. It's like a minefield now, and it is really going to be a serious impediment to innovation. We need to collectively tell the government to knock this stuff off, and fast, lest we find ourselves in the technology wastebasket soon.
Re:I would hope apple will defend. (1)
node 3 (115640) | more than 3 years ago | (#36324256)
Life is full of risks. The odds of being hit with a patent lawsuit are low, and generally even if you lose, the impact is minimal (not that I agree with Lodsys at all, but even worst case, they are asking for a very small percentage). After all, what good is a parasite that kills its host? Better to keep it alive to milk indefinitely.
Anyway, my point is if you are shying away from doing something because you *might* meet with adversity, you are doing it wrong.
Re:I would hope apple will defend. (0)
Anonymous Coward | more than 3 years ago | (#36325022)
Exactly. Chinese companies don't deal with this shit. They are free to steal from Western firms without their domestic firms stomping on each other in some rural province.
While the US has the pissing contests, China, Brazil, Russia, and other countries are innovating.
Re:I would hope apple will defend. (1)
shutdown -p now (807394) | more than 3 years ago | (#36325140)
this patent appears sufficiently broad enough so that Android and WP7 developers may well be next.
The question is whether Google and MS will indemnify device manufacturers for the use of their OS.
Re:I would hope apple will defend. (2)
0racle (667029) | more than 3 years ago | (#36323078)
Re:I would hope apple will defend. (1)
hedwards (940851) | more than 3 years ago | (#36323134)
Then what precisely did Google and Apple receive when they paid the licensing fee?
Re:I would hope apple will defend. (1)
QuasiSteve (2042606) | more than 3 years ago | (#36323404)
I don't know about Google, but seeing as Apple won't disclose the terms of the agreement - maybe they're not allowed to - we can only guess.
And as guesses go, this seems reasonable, albeit short-sighted on the part of Google/Apple if it were so:
Google/Apple acquired a license to use this technology themselves. I.e. if Google added it to their Google Maps app, that's fine. If Apple added it to some iTunes app, that's fine. It's their apps, they have a license to do so.
But if a third party starts using that technology, well that's not Google/Apple using it then, is it? That technology may be encapsulated in the same thing that Google and Apple's own apps might use (e.g. the API), but it's still the third party actually making use of it.
If third party use of the licensed tech is in fact not covered by the agreement, then what they received is something that's gonna hurt one way or another.
Re:I would hope apple will defend. (0)
Anonymous Coward | more than 3 years ago | (#36323954)
It's down to the API. You cannot use this "feature" without their APIs. In Apple's case, there is no alternative. Apple have set up their store and provided this mess. Either they're so greedy they don't care, or their lawyers got too cocky like IBM's treatment of Microsoft, or the patents are genuine. They need to crush Lodsys in the courts and destroy their portfolio. Google also has a few pennies spare and should join in. Better yet, bribe the government a little more and have software patents ruled invalid. Oh wait, no one will do that, there's too much money involved.
Apple took some apps down instead of defending (0)
FlorianMueller (801981) | more than 3 years ago | (#36323094)
Re:Apple took some apps down instead of defending (1)
Lunix Nutcase (1092239) | more than 3 years ago | (#36323146)
Did Apple also obtain a license on that patent? If not, how is that even remotely analogous to the situation at hand?
Re:Apple took some apps down instead of defending (1)
FlorianMueller (801981) | more than 3 years ago | (#36323216)
Re:Apple took some apps down instead of defending (1)
s73v3r (963317) | more than 3 years ago | (#36325858)
Apple claims the opposite; that their license does cover their developers. So to them, the situations are completely different.
Re:I would hope apple will defend. (1)
hsmith (818216) | more than 3 years ago | (#36323104)
Apple pretty much has to step in at this point. I suspect they will file an injunction of some sort for this suit. The implications will be bad if Lodsys can continue to rape companies.
Re:I would hope apple will defend. (2)
Altus (1034) | more than 3 years ago | (#36323192)
I hope that Apple will step up, but I'm not sure there is anything in the iOS developers agreement that requires them to do so or guarantees any kind of protection against this kind of thing.. If anyone knows of one I would like to see it.
Re:I would hope apple will defend. (1)
node 3 (115640) | more than 3 years ago | (#36324370)
You're right, there is nothing in the agreement that forces Apple to do this. However, you may be surprised to know that corporations always do things that they aren't forced to do. Generally, these would be things that are reasonably seen as "in their own best interest", but they even do things that the people running the company think is "the right thing to do" (there are definitely some industry leaders for whom this phrase is meaningless, but there are undoubtedly more for whom it does come up at least occasionally).
Anyway, in this particular case, Apple has already gone to bat for the developers to a small, but absolutely unrequired extent. If Apple truly believes their license covers third party developers, it's quite likely they will fully step into the legal battle. Maybe the result would be Lodsys loses completely, maybe Apple ends up licensing directly with Lodsys to pay the license directly as part of the 30% cut (the same as they do currently for things like credit card fees).
There are plenty of possibilities, but one thing is certain: it's in Apple's best interest to make sure developers don't have to worry about being gnawed at by third parties for developing for the App Store.
Apple will sue Lodsys (1)
tgibbs (83782) | more than 3 years ago | (#36325174)
This hurts Apple, because Apple gets a cut of in-app sales, and because lawsuits of this kind hurt app development for Apple products, which hurts product sales. So purely from self-interest, it seems virtually certain that Apple will sue Lodsys. If Lodsys actually had a good case, Apple would probably be willing to pay the licensing fee, but Apple's view is that they have already paid it..
Re:Apple will sue Lodsys (1)
node 3 (115640) | more than 3 years ago | (#36325606)
Aside from the last sentence, you just rephrased part of what I wrote..
This, however, I don't think is the case. I don't think this is a quick smash-and-grab. At worst, even assuming that Lodsys doesn't think they have a strong case (and I think they think they do, but who knows?), this is a gambit. The end game is that Lodsys gets a small cut from every in app sale for doing absolutely nothing other than having purchased a patent and filing some lawsuits.
This is their business model. This is the very reason for patent trolls to exist. They are true parasites.
Re:I would hope apple will defend. (1)
Nom du Keyboard (633989) | more than 3 years ago | (#36324604)
I hope that Apple will step up, but I'm not sure there is anything in the iOS developers agreement that requires them to do so or guarantees any kind of protection against this kind of thing.. If anyone knows of one I would like to see it.
Apple had better step in, lest they be sued by their own developers for Fraud for requiring the use of an API that Apple either knew, or should have known, required the payment of undisclosed licensing fees to a third party (Lodsys).
Re:I would hope apple will defend. (1)
alvinrod (889928) | more than 3 years ago | (#36323478)
Also, if Apple initiates a defense, it's likely that Google, Microsoft, and several other companies will also aid in the defense because should Apple lose, they're probably next.
Re:I would hope apple will defend. (1)
node 3 (115640) | more than 3 years ago | (#36324412)
iOS is the larger target. More apps, more users, and disproportionately more revenue in iOS apps. It also makes a bigger splash in the news.
Re:I would hope apple will defend. (1)
GooberToo (74388) | more than 3 years ago | (#36325348)
Which begs the question, why don't they all just get together and pound the shit out of these guys.
Personally, I can't get my brain around the fact something so obvious is patentable in the first place. But beyond that, seems like just about every big player would be waiting in line to kick these guys to the curb if they thought it was the least bit defensible.
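For a sense of how commonplace the mechanism at issue is, here is a minimal, hypothetical sketch of the kind of in-app offer/feedback loop Lodsys claims to cover. All names and the flow are invented for illustration; no real StoreKit or Android billing API is involved:

```python
# Hypothetical model of the disputed pattern: an app presents an
# upgrade offer to the user and records the response. This is the
# everyday interaction that the patent claims are read to cover.

class InAppPrompt:
    def __init__(self, product, price_cents):
        self.product = product
        self.price_cents = price_cents
        self.responses = []  # "feedback" collected from users

    def show(self, respond):
        """Present the offer; `respond` simulates the user's choice."""
        offer = (f"Upgrade to {self.product} for "
                 f"${self.price_cents / 100:.2f}?")
        choice = respond(offer)
        self.responses.append(choice)
        return choice

# Simulated user who accepts the offer
prompt = InAppPrompt("Pro Version", 199)
bought = prompt.show(lambda offer: True)
print(bought)            # True
print(prompt.responses)  # [True]
```

That an entire "invention" reduces to a prompt plus a recorded answer is exactly why commenters find the patent's breadth hard to take seriously.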
Re:I would hope apple will defend. (1)
coinreturn (617535) | more than 3 years ago | (#36323900)
Re:I would hope apple will defend. (1)
Nom du Keyboard (633989) | more than 3 years ago | (#36324638)
You are both wrong. Yes, Apple takes 30% of REVENUE. Not 30% of profit (as original poster stated). Not a bad deal since they are agent.
Not a bad deal for Apple, that is, since most "agents" take 10%, and managers only take 15%.
Re:I would hope apple will defend. (1)
jscotta44 (881299) | more than 3 years ago | (#36325410)
You mean 10% and 15% plus whatever expenses they incur. And, their clients cover all the other costs of their business ventures. Bet when you add all those things up, you come up with 30% or more of total revenue going out the door.
Re:I would hope apple will defend. (4, Interesting)
dgatwood (11270) | more than 3 years ago | (#36323692).
Not only iOS apps but also Mac and Android (0, Troll)
FlorianMueller (801981) | more than 3 years ago | (#36323066)
Most of the infringement accusations relate to iOS apps, but they also include [blogspot.com] one Mac app (Twitterrific for Mac) and one Android app (Labyrinth for Android).
Unfortunately, the defense theory communicated by Apple in its letter to Lodsys -- and another theory discussed on the Internet in recent weeks (divided infringement) -- could be wrong [blogspot.com] .
Re:Not only iOS apps but also Mac and Android (0, Troll)
Anonymous Coward | more than 3 years ago | (#36323114)
Disclaimer: Florian Mueller is a well-known troll. He has routinely been wrong on these matters. See: Oracle vs Google over Java patents [slashdot.org] .
I was the first one to debunk the "3 claims" story (0)
FlorianMueller (801981) | more than 3 years ago | (#36323198)
Re:I was the first one to debunk the "3 claims" st (0)
Anonymous Coward | more than 3 years ago | (#36323380)
Yes, you were wrong. Especially with your laughable bullshit over those unit test files that weren't even part of a shipped product that you claimed were great "evidence" of this alleged infringement.
Those files were distributed by device makers (0)
FlorianMueller (801981) | more than 3 years ago | (#36323518)
Re:Those files were distributed by device makers (2)
Lunix Nutcase (1092239) | more than 3 years ago | (#36323884)
Great. Who actually shipped those files? Oh right, no one.
Re:Those files were distributed by device makers (-1, Troll)
FlorianMueller (801981) | more than 3 years ago | (#36323968)
Re:Those files were distributed by device makers (1)
Lunix Nutcase (1092239) | more than 3 years ago | (#36324442)
Once again, point out a phone that actually shipped with those files.
Re:Those files were distributed by device makers (0)
FlorianMueller (801981) | more than 3 years ago | (#36324516)
Re:Those files were distributed by device makers (1)
Lunix Nutcase (1092239) | more than 3 years ago | (#36324706)
Sorry, it's not my job to disprove something you yourself can't even prove. Once you show us an actual phone that shipped with those files we can talk. Until then you're just making shit up.
Re:Those files were distributed by device makers (0)
FlorianMueller (801981) | more than 3 years ago | (#36324748)
Re:Not only iOS apps but also Mac and Android (0)
Anonymous Coward | more than 3 years ago | (#36323610)
Presumably it will affect all applications that allow in-app purchases through a second party? How about the applications companies install on modern TVs and media players like Boxee Box? You can buy content through them, such as movies, TV episodes, music and even games. Even Amazon are doing it via Yahoo widgets.
Contents of Letter to Lodsys from Apple (1)
mandark1967 (630856) | more than 3 years ago | (#36323116)
umad Bro?
Signed,
Steve Jobs
(Sent from my iPhone)
A humble proposal (1, Offtopic)
straponego (521991) | more than 3 years ago | (#36323158)
Re:A humble proposal (1)
immakiku (777365) | more than 3 years ago | (#36323616)
Re:A humble proposal (1)
Nom du Keyboard (633989) | more than 3 years ago | (#36324674)
I'll respond to your proposal directly and point out the flaws. Patent trolls only exist because a) the patent system exists and b) it's not possible to differentiate the "trolls" from the legitimate inventors. Either you solve b) and remove this problem of trolls altogether, or any solution targeting patent trolls would altogether undermine a). So in short, your solution is the roundabout way of removing the patent system.
Or you apply the Troll Test: is the potential Troll actually using the patented item? No = Troll.
Re:A humble proposal (1)
s73v3r (963317) | more than 3 years ago | (#36325916)
> it's not possible to differentiate the "trolls" from the legitimate inventors.
Sure it is. Did you actually invent the item in question? Probably not a troll. If you bought the patent, are you actually using it in one of your products? Oh, you don't have any products? Then you're definitely a troll.
Re:A humble proposal (0)
Anonymous Coward | more than 3 years ago | (#36324280)
Dude, perhaps you should lay off the drugs.
Patent system broken (0)
msobkow (48369) | more than 3 years ago | (#36323178)
Everyone knows it, but I'll state the obvious: The US patent system is badly broken.
Re:Patent system broken (0)
Nom du Keyboard (633989) | more than 3 years ago | (#36324692)
Everyone knows it, but I'll state the obvious: The US patent system is badly broken.
Mod Obvious -1.
I can't take it anymore (1)
Derekloffin (741455) | more than 3 years ago | (#36323180)
Re:I can't take it anymore (0)
Anonymous Coward | more than 3 years ago | (#36323486)
Software patents, maybe even patents in general, HAVE TO GO! I don't think a single week passes without me hearing about yet another stupid patent suit. They've long since outlived their usefulness.
No. Patents have a function. Frivolous patent suits do not. There need to be significant penalties for these. They also need to reexamine the use of patents for generic ideas as opposed to specific inventions and methods.
Lets not forget... (4, Informative)
thestudio_bob (894258) | more than 3 years ago | (#36323214)
Lodsys Sues 7 iPhone Devs and 1 Android Dev Over Patent Infringement Claims.
Lodsys sues 7 app developers in East Texas, disagrees with Apple; Android also targeted [blogspot.com]
Re:Lets not forget... (1)
Anonymous Coward | more than 3 years ago | (#36325638)
The punchline of course is nobody cares about the Android devs...
Patent Office (1)
Anonymous Coward | more than 3 years ago | (#36323232)
I made the mistake of trying to read one of the patents. I have new-found respect for the staff employed by the USPTO. Good heavens, how someone can make a living reading these on a daily basis is beyond me.
Re:Patent Office (1)
God'sDuck (837829) | more than 3 years ago | (#36323580)
I made the mistake of trying to read one of the patents. I have new-found respect for the staff employed by the USPTO. Good heavens, how someone can make a living reading these on a daily basis is beyond me.
I don't feel too bad...failure to apply in plain English (except for technical details of a specific implementation) should be immediate grounds for dismissal of a patent. That you can apply with garbage like this and get a patent is the result of precedent. That said...that's the fault of management, not the poor kids fresh out of law school working there until they can find a better job.
Re:Patent Office (0)
Anonymous Coward | more than 3 years ago | (#36323596)
Hi,
I'm an examiner at the USPTO and I would like to share with you our methods for reviewing and approving (or sometimes denying) patents. While they are often lengthy and obfuscated, our extensive experience can cut through the legalese and get straight to the nuts and bolts.
First, we divide examiners into areas of expertise. We have many engineers from various disciplines: electrical; structural; mechanical; civil; etc. Water experts. Software experts. Hardware experts. Cooling systems. Heating systems. Automotive, aeronautic, nautical. Bridge builders and road builders and widget makers. You get the idea.
So we divide them up into areas and we convene committees to first give each patent a cursory review, to ensure it meets the most basic criteria that all patents must. You are probably familiar with them: novel, non-obvious, useful. Patents are collected and brought before the committee composed of experts in that particular area, projected on large screens by a clerk, who reads each abstract aloud before distributing the full paper copies.
During these committee meetings we expert examiners often play cards, sleep, eat sandwiches, go for a walk. Things like that. I'm posting on Slashdot right now! We figured out a long time ago that no one pays any attention until after we issue, and then it becomes a matter for the courts. Just like this Lodsys thing! It will all work out fine in the end.
Funny story: during the examination of these two Lodsys patents I was playing Pocket God on my iPhone!
Re:Patent Office (1)
bjwest (14070) | more than 3 years ago | (#36324306)
Sounds like the way Congress and the House operate - no one pays any attention (their corporate handlers have already told them how to vote) and it takes an hour or so for them to waddle their ass over to their desk to press a 'yea' or 'nae' button.
Re:Patent Office (1)
Dachannien (617929) | more than 3 years ago | (#36325764)
Good heavens, how someone can make a living reading these on a daily basis is beyond me.
Duct tape: apply directly to the forehead.
terrible (1)
pak9rabid (1011935) | more than 3 years ago | (#36323240)
Re:terrible (1)
mr_lizard13 (882373) | more than 3 years ago | (#36323892)
Re:terrible (0)
Nom du Keyboard (633989) | more than 3 years ago | (#36324702)
What a Lodsys of bullshit. I hope these guys die horrible deaths.
Mod Popular +1.
Lodsys '078 is a classic submarine patent (1)
doperative (1958782) | more than 3 years ago | (#36323306)
"The '078 is the modern day version of a submarine patent, the claims morphing over more than a decade through a CIP and multiple continuations, most of which were abandoned along the way" link [applepatent.com]
Even if Apple isn't bound to step in (1)
shacky003 (1595307) | more than 3 years ago | (#36323314)
If they stand by and do nothing, it will scare future developers away from creating iOS apps.
Why wouldn't they defend the devs when it could mean issues later on for the profit machine?
Will Apple? OF COURSE! (1)
erroneus (253617) | more than 3 years ago | (#36323464)
This is a no-brainer. Even if this was Microsoft instead of Apple, the big company that depends on its developers to enrich their products absolutely had to defend its developers whenever and however possible.
To take the view that Apple is too "negative adjective" to do the "right thing" would be absurd. They have their own interests to protect and they most certainly will. If they failed to do that, you can expect a massive drop in quality, enthusiasm and number of developers for Apple's platforms. This, in turn, would spell quiet disappointment in the user community and only the long-term, hard core fans would remain while all the latest and greatest things would be arriving in Android or even Windows.
To which anti-patent organization should I donate? (1)
ChangeOnInstall (589099) | more than 3 years ago | (#36323704)
I've been strongly offended by software patents ever since I learned over a decade ago about how meager the "innovations" they protect can be. I think most of us will make one or two "patentable innovations" per day before lunch, or at least infringe with some fundamental task like throwing an exception (never realizing we were "innovating" or "infringing" in the process).
So where should we send the money? I want to donate to an org that shares my opinions and is doing something about it. The two I know of are as follows, but would appreciate additional suggestions.
EFF Patent Busting Project: [eff.org]
End Software Patents: [endsoftpatents.org]
Why this is more an issue for iOS than Android (0)
Anonymous Coward | more than 3 years ago | (#36323748)
The developers are using prescribed, Apple-provided APIs and are barred by Apple from implementing alternatives.
Take it to Congress (3, Insightful)
d3xt3r (527989) | more than 3 years ago | (#36323774)
Microsoft behind this? (0)
Anonymous Coward | more than 3 years ago | (#36323836)
I'm beginning to wonder if Microsoft is behind this. It's not unusual for Microsoft to use another company to attack competition using "IP" as weaponry.
Re:Microsoft behind this? (1)
Kalriath (849904) | more than 3 years ago | (#36325650)
Considering their developers are being threatened too, I find that highly unlikely.
If Apple does start to defend iOS developers (1)
lpp (115405) | more than 3 years ago | (#36324130)
I suspect one concern Apple might have is what effect attempting to defend iOS developers might have. If they stay out of it, patent trolls like Lodsys will obviously continue to go after potentially infringing small fries in the hopes of browbeating them into settling. But if they enter the fray, it might set a precedent which could pull them into other infringement cases that they might feel less comfortable fighting. At some point they're going to have to draw a line and say either that they will pick and choose which infringement cases they will help defend or try to delineate some rules to be able to predict such situations. Either way has its drawbacks.
And of course, they may still opt out of defending. Sure, they wrote a strongly worded letter, but they still have yet to actually put a lawyer in a courtroom or at a negotiating table, on behalf of an unaffiliated iOS developer.
Boo! (1)
redshirt (95023) | more than 3 years ago | (#36324500)
Boo to the Eastern District Courts of Texas!
Indispensable Parties (4, Interesting)
Nom du Keyboard (633989) | more than 3 years ago | (#36324526)
Interplead Apple (1)
pacergh (882705) | more than 3 years ago | (#36324566)
If they don't join the suit the developers should interplead them in.
That's patentable? (1)
BurzumNazgul (1163509) | more than 3 years ago | (#36324670)
Flush the patent system; it's a turd.
Python time for Performance Measurement
00:00
In this last lesson, I’m going to show you a couple of useful functions that the Python time module provides you for working with the performance of your code, measuring it and modifying it as necessary.
00:13
The two functions that I’ll be showing you are as follows. time.perf_counter(), which is essentially a super precise little stopwatch, and just like a stopwatch, the perf_counter doesn’t actually give you a real time, per se.
00:29 It doesn’t give you the time of day that it is, but what it does do is gives you a super precise time in floating-point seconds that you can then use to measure small increments of time.
00:40 So just like a stopwatch, you might click the start and then see that 10.5 seconds have passed, but you don’t know anything about the real time of day it is.
00:49
You don’t know whether those 10 seconds were from 3:00 PM to 3:00 PM and 10 seconds or midnight to midnight and 10 seconds. So that’s what perf_counter() does, and then sleep() allows you to pause the execution of a thread of a Python program for the specified number of seconds.
01:05 And so this is really useful for things like rate limiting, and any application where you need things to happen over sustained intervals of time rather than as fast as possible.
01:16 Let’s look in the REPL and see how these work in practice.
01:20
What I want to show you first is just the bare output of perf_counter(). And as you can see, there’s a nanoseconds version as well.
01:29
This time output is obviously not the same as time.time(), so it doesn’t represent the time since the epoch. What it represents instead is the very precise time since a recent fixed point, which means that it’s useless to determine the date, but what it’s really useful for is to say something like a = time.perf_counter(), and then shortly afterward, you could say b = time.perf_counter(), and then you can say b - a and get a very precise account of the time delta
02:02 between those things. So it’s used for measurements between recent intervals, instead of being used as a way to reckon from some always-unchanging point.
02:13
What you should use this for is, instead, short distances between two things. You can see this in action, how it might be useful for functions, by saying something like def test().
02:24
And then what you can do there is you can start at the beginning of the function, you can start a perf_counter, then you can do some kind of unexciting work or very exciting work, depending on what you’re doing.
02:37
I’ll just do, maybe, 10 million times x = x + 1. And then at the end, you can just return the end perf_counter minus the start.
02:48 Then when you call it, it’ll hang for a little bit and then it’ll tell you that this took exactly, or not perfectly exactly, but very, very close to precisely, this amount of time: 0.73995 seconds.
03:01 So that’s really useful because there are a lot of applications where you need, really, precision timing. You know, science, engineering, all those sorts of things you require the time that is this precise.
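The timing function narrated above can be sketched like this (my reconstruction from the narration, not a verbatim capture of the on-screen REPL):

```python
import time

def test():
    """Time ten million increments with time.perf_counter()."""
    start = time.perf_counter()  # stopwatch on
    x = 0
    for _ in range(10_000_000):
        x = x + 1
    end = time.perf_counter()    # stopwatch off
    return end - start

print(f"{test():.5f} seconds")   # the exact figure varies by machine
```

On the narrator’s machine this came out to roughly 0.74 seconds; your number will differ.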
03:11
So that’s really useful. Another function that’s useful is time.sleep(). And it does exactly what you would think it does: it makes the program essentially go to sleep for however many seconds you pass in, and that just means it’s not doing anything for this time.
03:25 So if I pass it in 5, you can count down from 5 and it’ll just essentially do nothing for that time. Why is this useful? Well, there are a lot of occasions where it’s useful to do nothing for specified times, right? So think about rate limiting, for example.
03:41 If you’re querying some kind of server, you probably don’t want to always be querying that server. You want to give some gaps in there to allow your program to do other things or simply to allow the server to be able to give you that information and not have its resources constantly tied up by giving you back the information you’re requesting. So that’s one example, but there are many, many others where sleeping is actually something quite useful.
04:06
So I want to define one more test function, which is just test_sleep, and I’ll say, let’s give it a parameter called x, which will just be the number of seconds.
04:15
And then I’m actually gonna use perf_counter() again, just to make sure that this thing is sleeping for the right amount of time. So you can say time.sleep(x) and then just return, again, the perf_counter minus the start.
04:29
And so you can run that and let’s give it, maybe, 5 seconds, and as you can see, it will sleep for that amount of time and then it will return the perf_counter, which, as you can see, is pretty much infinitesimally close to 5 seconds.
04:43
So some of that variation might just be the time it takes to call perf_counter or other system factors that make this run a little bit longer than exactly 5 seconds. But as you can see, it’s vanishingly small, the actual difference.
04:56
So that’s how time.sleep() works. It’s really simple, but it can be useful in a lot of surprising ways. I can’t tell you the number of times where I’ve had an application where I’ve said, “Oh, why is this performing so badly?” Well, it’s because it’s querying some server and I’m not giving it enough time to, kind of, react. Sleep can be very helpful in unexpected ways, so it’s good to know.
05:16
These are just a couple of the interesting functions that the time module offers. I thought that they would be kind of fun and might be useful in your own code.
05:23 So take a look at the rest of the module’s documentation, if you’re looking for more interesting ways to interact with time and dates. Next up is the conclusion.
@vincentstclair The reference point (i.e. time represented by 0) is undefined for perf_counter(): docs.python.org/3/library/time.html?highlight=perf_counter#time.perf_counter
This means that perf_counter() can not be used to figure out the time of day (you can use time.time() for that, although the datetime library is usually a better option), but it’s very useful for measuring time intervals.
We have some more information about the different timer functions in Python Timers: realpython.com/python-timer/#using-alternative-python-timer-functions
Well said, @Geir Arne Hjelle! A salient detail from the docs is that perf_counter() uses the most precise possible clock on your system, so the reference point can’t be defined in a system-independent way.
vincentstclair on Aug. 11, 2020
What does the number of seconds from the bare output of perf_counter() represent?
What is this measuring, and how is it measuring it? What fixed point is it measuring from? | https://realpython.com/lessons/time-performance/ | CC-MAIN-2021-17 | refinedweb | 1,276 | 69.82 |
Okay guys, I'm really stuck on this and I need help badly. All I know is that I need a loop at the beginning to ask for four different marks; these marks have to be between 0-100. The marks are then stored in an array and then I do a sum such as SAR EAX,2 to work out the average mark.
If you guys could help that would be great!
Here's some code I have already:
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int myarray[4];
    int average = 0;
    int i, mark;

    /* Ask for four marks, each validated to be between 0 and 100 */
    for (i = 0; i < 4; i++) {
        do {
            printf("Enter the Mark for Module %d: ", i + 1);
            scanf("%d", &mark);
        } while (mark < 0 || mark > 100);
        myarray[i] = mark;
    }

    _asm {
        mov ecx, 4            ; four marks to add up
        mov eax, 0            ; running total
        mov ebx, 0            ; byte offset into myarray
    Loop1:
        add eax, myarray[ebx]
        add ebx, 4            ; advance one int (4 bytes)
        loop Loop1
        sar eax, 2            ; divide the total by 4
        mov average, eax
    }

    printf("The average mark is: %d\n", average);
    system("pause");          /* keep the console window open */
    return 0;
}
Thanks,
~Crag | https://www.daniweb.com/programming/software-development/threads/328944/help-arrays-and-loops | CC-MAIN-2018-39 | refinedweb | 150 | 91.92 |
1. Learning driver development is hard
The Windows driver environment has a steep learning curve and evolves quickly. A new driver developer needs professional training. An experienced developer should go to conferences and read articles and whitepapers to stay on top of the improvements.
2. Driver development is software engineering
Good driver development uses software engineering principles. A developer cannot just throw some code into a sample driver and expect to produce a high quality driver. A driver needs specifications for functionality and testing. Coding a driver should reflect good development practices.
3. It is not just the Win9x computer model anymore
Too many times developers ignore the scope of the Windows system when developing a driver. Windows drivers and their supporting programs must consider 64-bit processors, multi-processing, memory greater than 4 GB, headless systems, multi-user systems, and hot-plug devices.
4. Use the latest tools
Microsoft constantly upgrades the DDK with new samples, documentation, and tools. Additionally, each new version of Windows has additional validation for drivers. Use the latest DDK and test on the latest OS.
5. Find bugs at compile time
A driver should compile cleanly with /W4 and PREfast before being loaded for testing. When appropriate, enable DEPRECATE_DDK_FUNCTIONS. Drivers should use the C_ASSERT macro to test compile time conditions. Consider using Lint for further checks. Finally, run all INF files through ChkInf.
6. Take advantage of runtime checking
The driver should run cleanly under the Driver Verifier with all options enabled, Call Usage Verifier, and the Checked Build of Windows. A good driver will have extensive ASSERTs and conditional validity checks built into it.
7. No one can test too much
Allot more time for testing than for development. A driver needs testing at many levels: incorporate unit testing, driver tests, and system level testing. Take advantage of the tests from Microsoft, but plan to write driver-specific tests.
8. Use profiling, code coverage, and code reviews
Profile the completed driver and correct any hotspots. Use a code coverage tool and then inspect the output to see if all significant code was tested. Finally, using the data from profiling and coverage, hold a code review to find potential problems before the driver gets to the customer.
9. Plan for maintenance and modifications
Take advantage of WPP software tracing to provide diagnostic capabilities in the driver. Consider a WinDBG extension to display complex data structures in a driver. Finally, recognize that the actual driver implementation needs to be documented for support.
10. Bugs will happen
Even the best developers have bugs. Make sure that the development team whose code caused a problem knows of the problem so it does not happen again. Finally, driver developers leverage existing code. Be sure that you fix the bug in all drivers based on that code.
This is a pipe dream, of course. Every company believes their world-changing hardware is completely unique and worthy of protection under lock and key, but it is a shame nonetheless. Windows DDK programming is very much a "word of mouth" adventure: "Fred was able to get streaming working by connecting his pins to the KsFrammis filter; we should try it." Without the operating system source code as an ultimate reference, we are forced to use trial-and-error to try to understand what lies beneath. Because driver source is so well-protected, we all have to rediscover this information ourselves, over and over. That's an enormous waste of resources.
Isolating the origin of a bug is one of the most challenging tasks for a driver writer. However, it turns out that most bugs result from invalid assumptions that the programmer made about the state of the driver at certain code points.
Unfortunately, unusual driver states usually do not immediately lead to remarkable system behavior. It may take a while before such a bug takes effect. At the time the system crashes, the code originally causing the bug might no longer be executing.
I believe one can easily identify the bigger part of all driver bugs from debug output that the driver prints whenever it detects an unusual situation. You should therefore place sanity checks in your code wherever some state change might have happened.
A well-known technique for adding sanity checks is the ASSERT() macro. However, ASSERT() will break into the debugger when the assertion fails, and also only applies to checked driver builds and is thus not available for production drivers.
Of course, for performance reasons I also do not want all sanity checks to be present in a production driver. But I still want some of them. I therefore use macros for all debug output, which I can expand to white space on a free build, for instance:
if ((pVirt = MmAllocateNonCachedMemory(NumberOfBytes)) == NULL) {
    WARNING(("Out of memory (size=%lu).\n", NumberOfBytes));
    return (NULL);
}
VERBOSE(("Successfully allocated %lu bytes.\n", NumberOfBytes));
If I want VERBOSE() to be a no-op in my production driver, I define the macros as follows:
#ifdef DBG
#define VERBOSE(x) DbgPrint x
#else
#define VERBOSE(x)
#endif
#define WARNING(x) DbgPrint x
Notice the missing parentheses for the call to DbgPrint() here; they are provided as the extra parentheses around the message text and parameters in the example above.
For a more detailed example of how debug macros can be defined efficiently, see the "…\inc\…\minidrv.h" header file in the DDK. | http://www.microsoft.com/whdc/resources/mvp/xtremeMVP_drv.mspx | crawl-002 | refinedweb | 901 | 54.93 |
Implementing Google Analytics and Google Tag Manager into a React JS App
Terminology
- SPA — Single Page Application
- React — React JS for building SPAs
- GA — Google Analytics, now Google Universal Analytics a.k.a. “GUA”
- GTM — Google Tag Manager
GTM Container Code
The first step is to add the GTM container to the React app. The GTM container code can either be added directly to your index.html or via a package.
Method 1 (preferred): Add GTM Container Code via Package
For this method, we will need a handy package, react-gtm-module. Start by installing the package.
npm i react-gtm-module
To initialize GTM with this method, we need to include the package in our app.js and provide a GTM ID. Remember to swap in your own gtmId.

import TagManager from 'react-gtm-module'

const tagManagerArgs = {
  gtmId: '<YOUR GTM ID>'
}

TagManager.initialize(tagManagerArgs)
Method 2: Add GTM Container Code via Index.html
The index.html can be found at projectName/public/index.html within your working directory.
If you don’t already have the code then you can copy it from the Admin > Install Google Tag Manager section in GTM. You will need GTM admin access to get it or you can ask your GTM admin to get the code for you. Then we simply follow the instructions provided by Google:
Paste the GTM script snippet as high in the <head> of the page as possible.
Additionally, paste this code immediately after the opening <body> tag:
<!-- Google Tag Manager (noscript) -->
<noscript><iframe src="https://www.googletagmanager.com/ns.html?id=<YOUR GTM ID>"
height="0" width="0" style="display:none;visibility:hidden"></iframe></noscript>
<!-- End Google Tag Manager (noscript) -->
In a pinch, you can take my code snippets above and swap out “<YOUR GTM ID>” for your own GTM ID.
Once complete, your index.html should look something like this.
<html>
  <head>
    <title>React App</title>
    <!-- ... -->
  </head>
  <body>
    <!-- Google Tag Manager (noscript) -->
    <noscript><iframe src="https://www.googletagmanager.com/ns.html?id=<YOUR GTM ID>"
    height="0" width="0" style="display:none;visibility:hidden"></iframe></noscript>
    <!-- End Google Tag Manager (noscript) -->
    <noscript>You need to enable JavaScript to run this app.</noscript>
    <div id="root"></div>
  </body>
</html>
Validate GTM Container Implementation
We can run a quick test in dev tools to check if the container code is coming in okay. Open up dev tools and look for gtm.js.
If at this point you are getting a 404 from gtm.js, double-check your GTM ID and make sure that you have published at least once. A new container will return a 404 if the container has never been published.
Page Tracking
Method 1: With routing and <head> management
This method assumes that you use something like react-router-dom and react-helmet-async, which means that your page URL and page titles are being updated between components. Other routers and <head> management packages will work here as well.
To signal to DTM that we would like to track a new page we can use an event like this:
window.dataLayer.push({
event: 'pageview'
});
For example, your App.js might look like this:
import React from 'react';
import TagManager from 'react-gtm-module'

const tagManagerArgs = {
  gtmId: '<YOUR GTM ID>'
}

TagManager.initialize(tagManagerArgs)

function App() {
  window.dataLayer.push({
    event: 'pageview'
  });
  return (
    <div className="App">
      <header className="App-header">
        <p>
          Analyst Admin
        </p>
        Learn React
      </header>
    </div>
  );
}

export default App;
Method 2: Without routing and <head> management
This method is very similar to the first, the only difference is that you can pass the page URL and Title that you want.
window.dataLayer.push({
event: 'pageview',
page: {
url: location,
title: title
}
});
url — The URL of the page that you wish to track. In a later step, we will set this up to go into the “location” custom field of the GA tag, which will in turn put it into the “dl” query parameter. I recommend putting in a full URL so that it matches the non-SPA page tracking. Google uses the following formula to get the URL: document.location.origin + document.location.pathname + document.location.search. GA documentation.
title — The title of the page. In a later step, we will set this up to go into the “title” custom field of the GA tag, which will in turn put it into the “dt” query parameter. GA documentation.
Page Tracking Validation
Set up page tracking with either of the two methods above and load a test page. In the console, search for the global dataLayer variable and expand the array. You should see an element for “pageview”.
If you don’t see the “pageview” element — check that you are inserting the “pageview” element into window.dataLayer and that your data layer name matches across your code.
Event Tracking
In GA event tracking, we work with four variables: category, action, label, and value. You can read more about it here. To track events in React we need to push a new element into the data layer that contains the variables that we want. For example:
window.dataLayer.push({
event: 'event',
eventProps: {
category: category,
action: action,
label: label,
value: value
}
});
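A small wrapper (my own, hypothetical helper — not from the article) keeps these pushes consistent across components:

```javascript
// In the browser, `window` is the global object; the fallback lets this
// sketch run outside a browser as well.
const win = typeof window !== 'undefined' ? window : globalThis;

// Hypothetical helper around the dataLayer push shown above.
function trackEvent(category, action, label, value) {
  win.dataLayer = win.dataLayer || [];
  win.dataLayer.push({
    event: 'event',
    eventProps: { category, action, label, value },
  });
}

trackEvent('video', 'play', 'intro-clip', 1);
```

Components then call trackEvent() instead of building the push object by hand, so the key names stay identical everywhere.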
GTM is flexible enough to allow for different key names and object structures. Additionally, GTM can transform the variables that we send in and can even fill in any gaps in data. However, to keep things simple, we’re just going to give it the variables in plain English so the mapping in GTM becomes trivial.
Event Tracking Validation
Similar to page validation, we can validate the event insertion into the data layer by checking the console.
This wraps the JavaScript side. Now we can switch over to GTM set up.
GTM Set Up
Triggers — set up the triggers which are going to listen for our events. For example, event: 'pageview' and event: 'event'.
Next, set up the variables for page tracking and event tracking. The page.url will come from the page.url data layer variable, as you may have guessed. We will also set a default value in case page.url is not sent in with the dataLayer.push event.
Similar to page.url, we will also need a page.title variable and set a default value for it.
If you use a Router with your React app, then the URL will be updated automatically; in this case you don’t need to send in page.url. Instead, we can have GTM pull the URL. You can use the code below to manually get the URL that the GA library would normally get.
function () { return document.location.origin + document.location.pathname + document.location.search; }
We can use a similar approach to get the document.title if the page.title variable is not sent in with dataLayer.push.
The final four variables to set up are for event tracking. I’ve only included category below. However, you will need to create variables for all four: category, action, label, and value.
The last part of the GTM set up is setting up the tags. You will need to check off “Enable overriding settings in this tag” to get access to Fields to Set. Then set location and title as shown below. Also, please note that the trigger is the event page view that we created earlier — that way the tag doesn’t fire on page load, only when an event is pushed into the data layer.
Set up the Event Tracking tag similarly to the Page View Tag:
Final Validation
To check for proper page tracking on the browser, open up dev tools, search network requests for “google-analytics” and click on the /collect request.
For event tracking validation, trigger your event and look for the event analytics HTTP request.
That’s it! You should have a basic, but fully functional GA+GTM implementation on your React App. Let me know in the comments if you have any questions, comments, or suggestions.
Originally published at on June 21, 2020. | https://analystadmin.medium.com/implementing-google-analytics-and-google-tag-manager-into-a-react-js-app-e986579cd0ee?readmore=1&source=---------6---------------------------- | CC-MAIN-2021-21 | refinedweb | 1,310 | 65.73 |
This manual documents GNU Serveez 0.2.2, released 9 November 2013.
We know, you usually don't read the documentation. Who does? But please, read at the very least this chapter. It contains information on the basic concepts. Larger parts of the manual can be used as a reference manual for the various servers.
-h, --help
Display this help and exit.
-V, --version
Display version information and exit.
-L, --list-servers
Display prefix and description of each builtin server, one per line, and exit. If Serveez was configured to include the Guile server (see Build and install), the output includes an additional line:
(dynamic) (servers defined in scheme)
-i, --iflist
List local network interfaces and exit.
-f, --cfg-file=FILENAME
File to use as configuration file (serveez.cfg).
-v, --verbose=LEVEL
Set level of logging verbosity.
-l, --log-file=FILENAME
Use FILENAME for logging (default is stderr).
-P, --password=STRING
Set the password for control connections. This option is available only if the control protocol is enabled. See Control Protocol Server.
-m, --max-sockets=COUNT
Set the maximum number of socket descriptors.
-d, --daemon
Start as daemon in background.
-c, --stdin
Use standard input as configuration file.
-s, --solitary
Do not start any builtin coserver instances.
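Putting a few of these options together, a typical invocation might look like this (a sketch; the file names and verbosity level are assumptions):

```shell
# Run Serveez as a background daemon, reading serveez.cfg and
# logging to a file with verbosity level 3.
serveez -d -f serveez.cfg -l serveez.log -v 3
```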
A port (in Serveez) is a transport endpoint. You might know them from other TCP or UDP server applications. For example: web servers (HTTP) usually listen on TCP port 80. However, there is more than TCP ports: we have UDP, ICMP and named pipes, each with different options to set. Every port has a unique name you assign to it. The name of the port is later used to bind servers to it.
The following examples show how you set up different types of port configurations. You start to define such a port using the procedure define-port!. The first argument specifies the name of the port configuration. The remaining argument describes the port in detail.
This table describes each configuration item for a port in Serveez. Note that not each item applies to every kind of port configuration.
proto (string)
This is the main configuration item for a port configuration setting up the type of port. Valid values are ‘tcp’, ‘udp’, ‘icmp’, ‘raw’ and ‘pipe’. This configuration item decides which of the remaining configuration items apply and which do not.
port (integer in the range 0..65535)
The port item determines the network port number on which TCP and UDP servers will listen. Thus it does not make sense for ICMP and named pipes. If you pass '0', Serveez will determine a free port in the range between 1 and 65535.
recv (string or associative list)
This item describes the receiving (listening) end of a named pipe connection, i.e., the filename of a fifo node to which a client can connect by opening it for writing. Both the recv and send items apply to named pipes only. The value can either be an associative list or a simple filename. Using a simple filename leaves the additional options at their default values. They deal mainly with file permissions and are described below.
send (string or associative list)
This item is the sending end of a named pipe connection. It is used to send data when the receiving (listening) end has detected a connection. The following table enumerates the additional options you can setup if you pass an associative list and not a simple filename.
name (string)
The filename of the named pipe. On Windows systems you can also specify the hostname on which the pipe should be created in the format ‘\\hostname\pipe\name’. By default (if you leave the leading ‘\\hostname\pipe\’ part) the pipe will be created on ‘\\.\pipe\name’ which refers to a pipe on the local machine.
permissions (octal integer)
This specifies the file permissions a named pipe should be created with. The given number is interpreted in a Unix’ish style (e.g., ‘#o0666’ is a permission field for reading and writing for the creating user, all users in the same group and all other users).
user (string)
The file owner (username) of the named pipe in textual form.
group (string)
The file owner group (groupname) of the named pipe in textual form. If this item is left out, it defaults to the file owner's primary group.
uid (integer)
The file owner of the named pipe as a user id. You are meant to specify either the uid item or the user item. Serveez will complain about conflicting values.
gid (integer)
The file owner group of the named pipe as a group id. This item defaults to the file owner's primary group id. You are meant to specify either the gid item or the group item. Serveez will croak about conflicting values.
ipaddr (string)
This configuration item specifies the IP address (either in dotted decimal form e.g., ‘192.168.2.1’ or as a device description which can be obtained via ‘serveez -i’) to which a server is bound to. The ‘*’ keyword for all known IP addresses and the ‘any’ keyword for any IP address are also valid values. The default value is ‘*’. The configuration item applies to network ports (TCP, UDP and ICMP) only.
device (string)
The device configuration item also refers to the IP address a server can be bound to. It overrides the ipaddr item. Valid values are network device descriptions (probably no aliases and no loopback devices). It applies to network ports (TCP, UDP and ICMP) only.
A note on device bindings: Device bindings are based on the SO_BINDTODEVICE socket layer option. This option is not available on all systems. We only tested it on GNU/Linux (2.2.18 and 2.4.17 as of this writing). Device bindings are very restrictive: only root can do it and only physical devices are possible. The loopback device cannot be used and no interface alias (i.e., ‘eth0:0’). A device binding can only be reached from the physical outside but it includes all aliases for the device. So if you bind to device ‘eth0’ even ‘eth0:0’ (and all other aliases) are used. The connection has to be made from a remote machine. The advantage of this kind of binding is that it survives changes of IP addresses. This is tested for ethernet networks (i.e., eth*) and isdn dialups (i.e., ippp*). It does not work for modem dialups (i.e., ppp*) (at least for Stefan's PCMCIA modem). The problem seems to be the dialup logic actually destroying ppp*. Other opinions are welcome.
Device bindings always win: If you bind to ‘*’ (or an individual IP address) and to the corresponding device, connections are made with the device binding. The order of the bind-server! statements does not matter. This feature is not thoroughly tested.
backlog (integer)
The backlog parameter defines the maximum length the queue of pending connections may grow to. If a connection request arrives with the queue full, the client may receive an error. This parameter applies to TCP ports only.
type (integer in the range 0..255)
This item applies to ICMP ports only. It defines the message type identifier used to send ICMP packets (e.g., ‘8’ is an echo message i.e., PING).
send-buffer-size (integer)
The send-buffer-size configuration item defines the maximum number of bytes the send queue of a client is allowed to grow to. The item influences the “send buffer overrun error condition”. For packet oriented protocols (UDP and ICMP) you need to specify at least the maximum number of bytes a single packet can have. For UDP and ICMP this is 64 KByte. The value specified here is an initial value. It is used unless the server bound to this port changes it.
recv-buffer-size (integer)
The recv-buffer-size configuration item defines the maximum number of bytes the receive queue of a client is allowed to grow to. The item influences the “receive buffer underrun error condition”. The value specified here is an initial value. It is used unless the server bound to this port changes it.
connect-frequency (integer)
This item determines the maximum number of connections per second the port will accept. It is a kind of “hammer protection”. The item is evaluated for each remote client machine separately. It applies to TCP ports.
allow (list of strings)
Both the allow and deny lists are lists of IP addresses in dotted decimal form (e.g., ‘192.168.2.1’). The allow list defines the remote machines which are allowed to connect to the port. It applies to TCP ports.
deny (list of strings)
The deny list defines the remote machines which are not allowed to connect to the port. Each connection from one of these IP addresses will be refused and shut down immediately. It applies to TCP ports.
Definition of a TCP port configuration with the name foo-tcp-port. The enhanced settings are all optional, including the ipaddr property which defaults to ‘*’. The ipaddr item can contain any form of a dotted decimal internet address, a ‘*’, ‘any’ or an interface description which you can obtain by running ‘serveez -i’.
(define-port! 'foo-tcp-port '(
  ;; usual settings
  (proto . tcp)              ;; protocol is tcp
  (port . 42421)             ;; network port 42421
  (ipaddr . *)               ;; bind to all known interfaces
  (device . eth0)            ;; bind to network card
  ;; enhanced settings
  (backlog . 5)              ;; enqueue max. 5 connections
  (connect-frequency . 1)    ;; allow 1 connect per second
  (send-buffer-size . 1024)  ;; initial send buffer size in bytes
  (recv-buffer-size . 1024)  ;; initial receive buffer size in bytes
  ;; allow connections from these ip addresses
  (allow . (127.0.0.1 127.0.0.2))
  ;; refuse connections from this ip address
  (deny . (192.168.2.7))
  ))
Definition of a pipe port configuration with the name foo-pipe-port. When bound to a server it creates the receiving end and listens on that. If some client accesses this named pipe, the server opens the sending end, which the client has to have opened for reading beforehand.
The only mandatory item is the file name of each pipe. If you want to specify a user creating the named pipe (file ownership) use either the user or the uid setting. Same goes for the items group and gid.
(define-port! 'foo-pipe-port `(
  (proto . pipe)                    ;; protocol is named pipe
  ;; specify the receiving endpoint
  (recv . ((name . ".foo-recv")     ;; name of the pipe
           (permissions . #o0666)   ;; create it with these permissions
           (user . "calvin")        ;; as user "calvin"
           (uid . 50)               ;; with the user id 50
           (group . "heros")        ;; which is in the group "heros"
           (gid . 100)))            ;; with the group id 100
  ;; specify the sending endpoint
  (send . ((name . ".foo-send")
           (permissions . #o0666)
           (user . "hobbes")
           (uid . 51)
           (group . "stuffed")
           (gid . 101)))
  ))
Define an ICMP port configuration which will accept connections from the network interface ‘127.0.0.1’ only and communicate via the message type 8 as described in the Tunnel Server chapter. The name of this port configuration is foo-icmp-port. When you are going to bind some server to this kind of port you have to ensure root (or Administrator under Windows) privileges.
(define-port! 'foo-icmp-port '((proto . icmp) (ipaddr . 127.0.0.1) (type . 8)))
Simple definition of a UDP port configuration with the name foo-udp-port.
(define-port! 'foo-udp-port `((proto . udp) (port . 27952)))
The SNTP server can be queried with the ‘netdate’ command. It is used to synchronize time and dates between Internet hosts. The protocol is described in the ARPA Internet RFC 868. Thus it is not really an SNTP server as described by RFC 2030 (Simple Network Time Protocol (SNTP) Version 4 for IPv4, IPv6 and OSI). It is rather an excellent example of how to implement a UDP server in Serveez.
This protocol provides a site-independent, machine readable date and time. The Time service sends back to the originating source the time in seconds since midnight on January first 1900.
The configuration of this server does not require any item.
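A minimal configuration might therefore look like this in serveez.cfg (a sketch: the port number 37 is the classic RFC 868 time port, the instance name is an assumption, and the server type prefix is taken to be ‘sntp’):

```scheme
;; Sketch: define a UDP port, instantiate the SNTP server type
;; (no configuration items needed) and bind the two together.
(define-port! 'sntp-udp-port '((proto . udp)
                               (port . 37)))
(define-server! 'sntp-server '())
(bind-server! 'sntp-udp-port 'sntp-server)
```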
If it is necessary to complete blocking tasks in Serveez you have to use coservers. The actual implementation differs between platforms. On Unices they are implemented as processes communicating with Serveez over pipes. On Win32 Serveez uses threads and shared memory.
This chapter deals with embedding the Serveez core library into standalone C/C++ applications and using it in order to write additional servers.
The following small example shows how to use the Serveez core library to print the list of known network interfaces. As you will notice there are three major steps to do: Include the library header with #include <libserveez.h>, initialize the library via svz_boot and finalize it via svz_halt. In between these calls you can use all of the API functions, variables and macros described in Embedding API.
#include <stdio.h>
#include <stdlib.h>
#include <libserveez.h>

static int
display_ifc (const svz_interface_t *ifc, void *closure)
{
  char *addr = svz_inet_ntoa (ifc->ipaddr);

  if (ifc->description)
    /* interface with description */
    printf ("%40s: %s\n", ifc->description, addr);
  else
    /* interface with interface # only */
    printf ("%31s%09lu: %s\n", "interface # ", ifc->index, addr);
  return 0;
}

int
main (int argc, char **argv)
{
  /* Library initialization. */
  svz_boot ("example");

  /* Display a list of interfaces. */
  printf ("local interfaces:\n");
  svz_foreach_interface (display_ifc, NULL);

  /* Library finalization. */
  svz_halt ();
  return EXIT_SUCCESS;
}
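Assuming libserveez and its headers are installed in the usual locations, such a program might be built along these lines (the file name is an assumption):

```shell
# Compile the example and link it against the Serveez core library.
cc -o example example.c -lserveez
```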
The configure script used to build libserveez takes many options (see Build and install). Some of these are encapsulated by svz_library_features.
Return a list (length saved to count) of strings representing the features compiled into libserveez.
Here is a table describing the features in detail:
debug
Present when ‘--enable-debug’.
heap-counters
Present when ‘--enable-heap-count’.
interface-list
Present when ‘--enable-iflist’.
poll
Present when ‘--enable-poll’ and you have poll(2).
sendfile
Present when ‘--enable-sendfile’ and you have sendfile(2) or some workalike (e.g., TransmitFile).
log-mutex
Present when svz_log uses a mutex around its internal stdio operations, implying that you have some kind of thread capability (perhaps in a separate library). If your system has fwrite_unlocked, the configure script assumes that fwrite et al already operate in a locked fashion, and disables this.
flood-protection
Present when ‘--enable-flood’.
core
The networking core. This is always present.
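As a sketch, the feature list could be printed like this (assumptions: libserveez is linked, and svz_library_features returns an array of strings while storing its length in the size_t it is passed, per the description above; not compiled here):

```c
#include <stdio.h>
#include <libserveez.h>

int
main (void)
{
  size_t count, i;
  const char * const *features;

  /* Library initialization, as in the embedding example. */
  svz_boot ("features");

  features = svz_library_features (&count);
  for (i = 0; i < count; i++)
    printf ("%s\n", features[i]);

  svz_halt ();
  return 0;
}
```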
TCP sockets provide a reliable, stream oriented, full duplex connection between two sockets on top of the Internet Protocol (IP). TCP guarantees that the data arrives in order and retransmits lost packets. It generates and checks a per packet checksum to catch transmission errors. TCP does not preserve record boundaries.
Create a TCP connection to host host and set the socket descriptor in structure sock to the resulting socket. Return NULL on errors.
Read all data from sock and call the check_request function for the socket, if set. Return -1 if the socket has died, zero otherwise.
This is the default function for reading from sock.
If the underlying operating system supports urgent data (out-of-band) in TCP streams, try to send the byte in sock->oob through the socket structure sock as out-of-band data. Return zero on success and -1 otherwise (also if urgent data is not supported).
The pipe implementation supports both named and anonymous pipes. Pipe servers are implemented as listeners on a file system FIFO on Unices or “Named Pipes” on Windows (can be shared over a Windows network).
A FIFO special file is similar to a pipe, except that it is created in a different way. Instead of being an anonymous communications channel, a FIFO special file is entered into the file system.
Create a socket structure containing both the pipe descriptors recv_fd and send_fd. Return NULL on errors.
Create a (non blocking) pair of pipes. This differs in Win32 and Unices. Return a non-zero value on errors.
Create a pipe connection socket structure to the pair of named pipes recv and send. Return NULL on errors.
Return 1 if handle is invalid, otherwise 0.
Invalidate the handle pointed at by href.
Close handle. Return 0 if successful, -1 otherwise.
The ICMP socket implementation is currently used in the tunnel server which comes with the Serveez package. It implements a user protocol receiving and sending ICMP packets by opening a raw socket with the protocol IPPROTO_ICMP. The types of ICMP packets passed to the socket can be filtered using the ICMP_FILTER socket option (or by software, as done here). ICMP packets are always processed by the kernel too, even when passed to a user socket.
Create an ICMP socket for receiving and sending. Return NULL on errors, otherwise an enqueued socket structure.
“If you are calling this function we will send an empty ICMP packet signaling that this connection is going down soon.” [ttn sez: huh?]
Send buf with length length via this ICMP socket sock. If length exceeds the maximum ICMP message size, the buffer is split into smaller packets.
This section describes the internal coserver interface of Serveez. Coservers are helper processes meant to perform blocking tasks. This is necessary because Serveez itself is single threaded. Each coserver is connected via a pair of pipes to the main thread of Serveez communicating over a simple text line protocol. Each request/response is separated by a newline character.
Call func for each coserver, passing additionally the second arg closure. If func returns a negative value, return immediately with that value (breaking out of the loop), otherwise, return 0.
Under woe32 check if there was any response from an active coserver. Moreover keep the coserver threads/processes alive. If one of the coservers dies due to buffer overrun or is overloaded, start a new one.
Call this function whenever there is time, e.g., within the timeout of the select system call.
Destroy specific coservers with the type type. All instances of this coserver type will be stopped.
Create and return a single coserver with the given type type.
Return the type name of coserver.
Enqueue a request for the reverse DNS coserver to resolve address addr, arranging for callback cb to be called with two args: the hostname (a string) and the opaque data closure.
Enqueue a request for the DNS coserver to resolve host, arranging for callback cb to be called with two args: the ip address in dots-and-numbers notation and the opaque data closure.
Enqueue a request for the ident coserver to resolve the client identity at sock, arranging for callback cb to be called with two args: the identity (string) and the opaque data closure.
To make use of coservers, you need to start the coserver interface by
calling
svz_updn_all_coservers once before, and once after,
entering the main server loop.
If direction is non-zero, init coserver internals. Otherwise, finalize them. Return 0 if successful.
If direction is positive, init also starts one instance each of the builtin servers. If negative, it doesn’t.
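Put together, the lifecycle reads roughly like this (a sketch only: it assumes libserveez is linked and that svz_loop is the usual main-loop entry point; not compiled here):

```c
#include <libserveez.h>

int
main (void)
{
  svz_boot ("example");

  /* Bring the coserver interface up (positive direction: also
     start one instance each of the builtin coservers). */
  if (svz_updn_all_coservers (1) != 0)
    return 1;

  svz_loop ();                  /* enter the main server loop */

  /* Tear the coserver interface down again (zero: finalize). */
  svz_updn_all_coservers (0);
  svz_halt ();
  return 0;
}
```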
The core API of Serveez is able to register server types dynamically at runtime. It uses the dynamic linker capabilities of the underlying operating system to load shared libraries (or DLLs on Win32). This has been successfully tested on Windows and GNU/Linux. Other systems are supported but yet untested. Please tell us if you notice misbehaviour of any sort.
Set the additional search paths for the serveez library. The given array of strings gets svz_freed.
Create an array of strings, each containing an additional search path. The loadpath is held in the environment variable ‘SERVEEZ_LOAD_PATH’ which can be set from outside the library or modified using svz_dynload_path_set. The returned array needs to be destroyed after usage.
A server in Serveez is an instantiated (configured) server type. It is merely a copy of a specific server type with a unique server name, and is represented by svz_server_t in the core library.
This section contains functions dealing with the list of known servers in the core library of Serveez, also with the basics like creation and destruction of such servers.
Call func for each server, passing additionally the second arg closure.
Find a server instance by the given configuration structure cfg. Return NULL if there is no such configuration in any server instance.
Return a list of clients (socket structures) which are associated with the given server instance server. If there is no such socket, return NULL. Caller should svz_array_destroy the returned array.
Get the server instance with the given instance name name. Return NULL if there is no such server yet.
If direction is non-zero, run the initializers of all servers, returning -1 if some server did not think it is a good idea to run. Otherwise, run the local finalizers for all server instances.
These functions provide an interface for configuring a server. They are used to create and modify the default configuration of a server type in order to create a server configuration.
Instantiate a configurable type. The type argument specifies the configurable type name, name the name of the type (in the domain of the configurable type) and instance the instance name of the type. Return zero on success, otherwise -1.
Release the configuration cfg of the given configuration prototype prototype. If cfg is NULL, do nothing.
Create a collection of type, given the count items of data. Valid values of type are one of: SVZ_INTARRAY, SVZ_STRARRAY, SVZ_STRHASH. For a string hash, data should be alternating keys and values; the returned hash table will have count / 2 elements. The C type of data for an int array should be int[], and for string array or hash it should be char*[]. On error (either bad type or odd count for string hash), return NULL.
Here are some convenience macros for svz_collect:
Return an integer array svz_array_t * created from int cvar[].
Return a string array svz_array_t * created from char *cvar[].
Return a string hash svz_hash_t * created from char *cvar[].
The following functionality represents the relationship between port configurations as described in Port config funcs and server instances. When binding a server to a specific port configuration the core library creates listeners as needed by itself.
Bind the server instance server to the port configuration port if possible. Return non-zero on errors, otherwise zero. It might occur that a single server is bound to more than one network port if, e.g., the TCP/IP address is specified by ‘*’ (asterisk) since this gets expanded to the known list of interfaces.
Return an array of port configurations to which the server instance server is currently bound, or NULL if there is no such binding. Caller should svz_array_destroy the returned array when done.
Return an array of listening socket structures to which the server instance server is currently bound, or NULL if there is no such binding. Caller should svz_array_destroy the returned array when done.
Return the array of server instances bound to the listening sock, or NULL if there are no bindings. Caller should svz_array_destroy the returned array when done.
Check whether the server instance server is bound to the server socket structure sock. Return one if so, otherwise zero.
Format a space-separated list of current port configuration bindings for server into buf, which has size bytes. The string is guaranteed to be nul-terminated. Return the length (at most size - 1) of the formatted string.
Serveez was always designed with an eye on maximum portability. Autoconf and Automake have done a great job at this. A lot of #define's help to work around some of the different Unix' oddities. Have a look at config.h for a complete list of all these conditionals.
Most doubtful might be the Win32 port. There are two different ways of compiling Serveez on Win32: Cygwin and MinGW. The Cygwin version of Serveez depends on the Unix emulation layer DLL cygwin1.dll. Both versions work but it is preferable to use MinGW for performance reasons. The Cygwin version is slow and limited to a very low number (some 64) of open files/network connections.2
There are major differences between the Win32 and Unix implementations due to the completely different API those systems provide.
Because process communication is usually done by a pair of unidirectional pipes, we chose that method in order to implement the coservers on Unix. The Win32 implementation uses threads which are still part of the main process.
On Win32 systems there is a difference in network sockets and file descriptors. Thus we had to implement quite a complex main socket loop.
Both systems, Unix and Win32, do provide this functionality (Windows NT 4.0 and above). The main differences here are the completely different APIs. On a common Unix you create a named pipe within the filesystem via mkfifo. On Win32 you have to call CreateNamedPipe, which will create some special network device. A further difference is what you can do with these pipes. On Win32 systems this ‘network device’ is valid on remote machines.
Named pipes on Unix are unidirectional; on Win32 they are bidirectional and instantiatable.
There are some differences between the original Winsock 1.1 API and the new version 2.2.x.
The Winsock DLL and import library for version 1.1 are wsock32.dll and wsock32.lib and for version 2.2 it is ws2_32.dll and ws2_32.lib. Serveez is currently using version 2.2.
The Winsock API is still a bit buggy. Connected datagram behaviors are not pertinent to any WinSock 2 features, but to generic WinSock. On Win95 we do not see any reason for this behavior, but anyway ...
Raw sockets require Winsock 2. To use them under Windows NT/2000, you must be logged in as an Administrator. On any other Microsoft system we were trying to use the ICMP.DLL (an idiotic and almost useless API) without success. Microsoft says they will replace it as soon as something better comes along. (Microsoft's been saying this since the Windows 95 days, yet this functionality still exists in Windows 2000.) It seems like you cannot send ICMP or even raw packets from the userspace of Windows (except via the ICMP.DLL, which is limited to echo requests). We also noticed that you cannot receive any packets previously sent. The only thing which works on all Windows systems (9x/ME/NT/2000/XP) is receiving packets the “kernel” itself generated (like echo replies). One good thing we noticed about Windows 2000 is that the checksums of fragmented ICMP packets get correctly recalculated. That is not the case in the current Linux kernels.
To use the Win32 Winsock in the Cygwin port, you just need to #define Win32_Winsock and #include "windows.h" at the top of your source file(s). You will also want to add -lwsock32 to the compiler's command line so you link against libwsock32.a.
What preprocessor macros++.
Why do we not use pipes for coservers?
Windows differentiates between sockets and file descriptors; that is why you cannot select file descriptors. Please close the pipe's descriptors via CloseHandle and not closesocket, because the latter will fail.
The C run-time libraries have a preset limit for the number of files that can be open at any one time. The limit for applications that link with the single-thread static library (LIBC.LIB) is 64 file handles or 20 file streams. Applications that link with either the static or dynamic multithread library (LIBCMT.LIB or MSVCRT.LIB and MSVCRT.DLL), have a limit of 256 file handles or 40 file streams. Attempting to open more than the maximum number of file handles or file streams causes program failure.
As far as I know, one of the big limitations of Winsock is that the SOCKET type is *not* equivalent to a file descriptor. It is, however, with BSD and POSIX sockets. That is one of the major reasons for using a separate data type, SOCKET, not an int, a typical type for a file descriptor. This implies that you cannot mix SOCKETs and stdio, sorry. This is the case when you use -mno-cygwin.
Actually they are regular file handles, just like any other. There is a bug in all 9x/kernel32 libc/msv/crtdll interface implementations: GetFileType returns TYPE_UNKNOWN for socket handles. Since this is AFAIK the only unknown type there is, you know you have a socket handle. There is a fix in the more recent perl distributions that you can use as a general solution. -loldnames -lperlcrt -lmsvcrt will get you TYPE_CHAR for socket handles.
Now follows the list on which operating systems and architectures Serveez has been built and tested successfully.
This section contains some of the documents and resources we read and used to implement various parts of this package. They appear in no specific order.
that is, if your system supports it
This was written circa 2003—maybe the situation is now improved. | http://www.gnu.org/software/serveez/manual/serveez.html | CC-MAIN-2015-06 | refinedweb | 4,988 | 67.25 |
Hi,
I am using EF class objects as grid datasources generated by command "dotnet ef dbcontext scaffold ...", which reside in another .Net Core project of the VS solution.
The parent class looks like:
namespace myDB.Models
{
    public partial class Parent
    {
        public Parent()
        {
            ParentChild = new HashSet<ParentChild>();
        }

        public int Id { get; set; }
        //some other properties...
        public virtual ICollection<ParentChild> ParentChild { get; set; }
    }
}
This basically works fine when I only use a simple table with SingleBand.
Now I would like to also show the parent's children in the grid.
The child class:
namespace myDB.Models
{
    public partial class ParentChild
    {
        public int Id { get; set; }
        public int ParentId { get; set; }
        public string Name { get; set; }
        //some other properties...
        public virtual Parent Parent { get; set; }
    }
}
But in the grid it does not show me the child columns; instead I see two columns, "IsReadOnly" and "Count". It seems like it has a problem with the HashSet or the ICollection property (which are generated by the .NET Core command by default).
Is there a way to solve this without having to edit the generated classes too much?
Thank you,
best regards
Hello Daniel,
I have been investigating the issue that you are reporting, and I have put together a sample class structure that is consistent with the one you have provided, but in doing so, I cannot seem to reproduce the behavior you are seeing. The UltraGrid is handling the hierarchy normally on my end. Perhaps there is another partial "Parent" or "ParentChild" class in your application somewhere?
I was speaking with my colleague and he had also recommended using a BindingList<T> instead of an ICollection<T> or HashSet<T> for your “ParentChild” hierarchy in this case. I am unsure if this will help you here, though, as HashSet<T> is working on my end with version 18.2.20182.175 of Infragistics for Windows Forms 2018 Volume 2.
Please let me know if you have any other questions or concerns on this matter.
UltraGridHashSetTest.zip
HashSet<T> and ICollection<T> are not good classes to use for data sources of bound controls.
I'm frankly surprised that HashSet<T> works at all. I guess the grid and the BindingManager are able to populate data from it because it's IEnumerable. But there will be functionality in the grid that will not work when using these types.
For example, if you turn on the AddNew row in the grid for either the parent or child band in your example, clicking on the AddNew row will raise an error message that the data source does not support adding rows.
Try to delete a row and you will get a similar message.
Try showing the grid initially and then adding or removing rows from the data source and the grid will not be notified of these changes and so the new rows will not show up and the deleted rows will still be there.
There may be other cases that do not work, and I would not be surprised if there were unexpected results in other operations.
I strongly recommend using BindingList<T>. That class is specifically designed for data binding.
Hi Andrew and Mike,
in a partial class, I added
public virtual BindingList<ParentChild> ParentChildrenForGrid { get; set; } = new BindingList<ParentChild>();
Using this property, it shows the correct child columns in the grid.
Thank you!
Update:
For anyone who is interested, I added the needed changes to my Github project which optimizes the "dotnet ef scaffold" output.
The "--winforms" parameter replaces ICollection with IList, and HashSet with BindingList. | https://www.infragistics.com/community/forums/f/ultimate-ui-for-windows-forms/119431/ultragrid-child-band-not-showing-columns-when-using-ef-classes-generated-by-dotnet-command | CC-MAIN-2019-04 | refinedweb | 592 | 62.38 |
Istio 404 (Not Found) error
Debugging a 404 (Not Found) error on Istio can be frustrating. Hopefully this will give you a place to start tracking down where things may be going wrong.
Wildcard Gateway conflict
There can be only one Gateway definition that uses a wildcard "*" hosts value. If you've deployed anything else that includes a wildcard Gateway, client calls will fail with a 404 status.
Example:
$ istioctl get gateways
GATEWAY NAME       HOSTS   NAMESPACE   AGE
bookinfo-gateway   *       default     20s
httpbin-gateway    *       default     3s
If so, you'll need to delete or change one of the conflicting gateways.
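For example, one of the two Gateways could be narrowed from "*" to a specific host so the definitions no longer overlap. A sketch (the host name is a placeholder for your own domain):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: httpbin-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "httpbin.example.com"   # was "*"; a specific host avoids the conflict
```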
Trace where the route is failing
Istio is like an onion (or, perhaps, an ogre): it has layers. A systematic way to debug a 404 is to work outward from the target.
The backend workload
Verify you can access the workload from the sidecar:
kubectl exec $WORKLOAD_POD -c istio-proxy -- curl localhost:80/headers
The backend sidecar
Set your service address and get the IP address of the workload pod.
SERVICE=httpbin.default.svc.cluster.local:80
POD_IP=$(kubectl get pod $WORKLOAD_POD -o jsonpath='{.status.podIP}')
Access the workload through the sidecar:
kubectl exec $WORKLOAD_POD -c istio-proxy -- curl -v --resolve "$SERVICE:$POD_IP" "http://$SERVICE/headers"
Or, if Istio mTLS is enabled:
kubectl exec $WORKLOAD_POD -c istio-proxy -- curl -v --resolve "$SERVICE:$POD_IP" --key /etc/certs/key.pem --cert /etc/certs/cert-chain.pem --cacert /etc/certs/root-cert.pem --insecure "https://$SERVICE/headers"
The gateway (or a frontend sidecar)
Access the service from the gateway:
kubectl -n istio-system exec $GATEWAY_POD -- curl -v "http://$SERVICE/headers"
Or, if Istio mTLS is enabled:
kubectl -n istio-system exec $GATEWAY_POD -- curl -v --key /etc/certs/key.pem --cert /etc/certs/cert-chain.pem --cacert /etc/certs/root-cert.pem --insecure "https://$SERVICE/headers"
Missing analytics
If you aren't seeing analytics in the Analytics UI, consider these possible causes:
- Apigee intake can be delayed a few minutes
- Envoy gRPC Access Log not configured correctly
- Envoy cannot reach Remote Service
- Remote Service is failing upload
Missing or bad API key not being rejected
If API key validation is not working properly, consider these possible causes:
- Direct proxy
- Be sure the listener is configured for intercept.
- Check the ext-authz configuration.
Invalid requests are being checked and allowed
- Remote Service configured for fail open
- Envoy not configured for RBAC checks
Missing or bad JWT not being rejected
The probable cause is that the Envoy JWT filter is not configured.
Valid API key fails
Probable causes
- Envoy cannot reach the remote service
- Your credentials are not valid
- Apigee API Product not configured for target and env
Troubleshooting steps
Check your API Product on Apigee
- Is it enabled for your environment (test vs prod)?
The product must be bound to the same environment as your Remote Service.
- Is it bound to the target you're accessing?
Check the Apigee remote service targets section. Remember, the service name must be a fully qualified host name. If it's an Istio service, the name will be something like helloworld.default.svc.cluster.local, which represents the helloworld service in the default namespace.
- Does the Resource Path match your request?
Remember, a path like / or /** will match any path. You may also use '*' or '**' wildcards for matching.
- Do you have a Developer App?
The API Product must be bound to a Developer App to check its keys.
Check your request
- Are you passing the Consumer Key in the x-api-key header?
Example:
curl -H "x-api-key: wwTcvmHvQ7Dui2qwj43GlKJAOwmo"
- Are you using a good Consumer Key?
Ensure the Credentials from the App you're using are approved for your API Product.
Check the Remote Service logs
- Start the Remote Service with logging at the debug level. Use the -l debug option on the command line.
- Attempt to access your target and check the logs
Check the logs for a line that looks something like this:
Resolve api: helloworld.default.svc.cluster.local, path: /hello, scopes: []
Selected: [helloworld]
Eliminated: [helloworld2 doesn't match path: /hello]
The .NET Framework SDK and Visual Studio .NET make it easy to use legacy COM components from .NET-based code (also referred to as managed code). The Framework does a lot of work for you by abstracting the differences between the unmanaged world of COM and the managed world of .NET through interception. Interception occurs in a piece of code referred to as the Runtime-Callable Wrapper (RCW), whose role is to seamlessly integrate the .NET and COM worlds so that they can peacefully coexist. For example, COM components report an error using a special return value that is an HRESULT data type. Your managed (.NET-based) application expects errors to be reported as exceptions and is unaware of the HRESULT data type. The RCW converts failure-related HRESULT codes (codes other than the S_OK success code and user-defined HRESULTs) into equivalent .NET exceptions, providing a programming model that's consistent with the rest of the .NET programming environment. When a COM component returns a user-defined HRESULT, the RCW maps it into a System.Runtime.InteropServices.COMException and stores the HRESULT value in the exception's ErrorCode property - again providing .NET applications with a consistent programming model that makes it easy to detect and respond to error conditions.
That's only a very small part of the .NET and COM interoperability story - this book covers the rest of it in great detail in an engaging way that brings some kick to a perhaps seemingly boring topic. Among the engaging aspects of this book are the "Digging Deeper", "Tip", and other notes that sprinkled throughout which provide valuable insights to the discussion at hand.
.NET and COM weighs in at 1578 pages with about 1245 pages of content, 250 pages of appendices, and 81 pages taking up the comprehensive index (all page counts are approximate since the book includes separator pages between sections and chapters). The book's content is packed with practical information in the form of guidance and detailed information right from the first page. However, the sheer density of the information makes it easy to miss important points such as the author's recommendations on how to version .NET components (assemblies) to make it easy for COM-based clients to use them in the unmanaged world.
The book is divided into nine sections:
Each section builds on its predecessors, but not all readers may agree with the organization. I personally would have preferred the Designing sections to appear before the Using sections, since good design is essential to easy use. Although most readers will likely read the book in the order it's presented, the book is organized to make it easy to hop around from section to section of interest without getting lost.
Although this is a great book, there are a couple of drawbacks:
Serviced components, also known as COM+ components, are given less than 10 pages of coverage throughout the book. Granted that COM+ sounds like it provides services only to COM components, it does play an important role in the managed world of .NET applications. In fact, COM+ is so important to .NET that there are an entire set of classes and attributes available in a dedicated namespace called System.EnterpriseServices. COM+ provides services that extend .NET applications' abilities over those provided by basic .NET.
The second drawback relates to how some important information gets dispersed throughout the entire book rather than appearing in just one place. For example, when you create a .NET component that will be used by COM clients, you have the option of using one of three attributes to expose the .NET component's interface to COM. The first time this feature is mentioned is in chapter three on page 397 where, if you read to the end of the "FAQ" box, you'll see a reference to chapter 12, where more information on the topic appears on page 556. What's interesting is that one of the most critical points appears in one of two "Caution" notes, making it difficult to see that there's something really important on the page. In addition, some more information appears, along with some of the text from chapter 12, much later in appendix A (on page 1257) with a reference back to chapter 12. All of these references and presentation styles work to dilute the importance of the information and effectively hide it from novice readers (those that are new to .NET, COM, or both).
Despite these few drawbacks, this is a great book that intermediate .NET developers with some COM experience can benefit from. The book's scope makes it useful for developers that work in both COM-centric and .NET-centric environments.
Technote (FAQ)
Question
What information is available that provides a brief overview of the MultiVersion File System (MVFS) that is required for the use of IBM Rational ClearCase dynamic views on Microsoft Windows, UNIX and Linux?
Answer
The information below is meant to supplement the existing documentation which can be found in the IBM Rational ClearCase Information Center under the topic of The multiversion file system.
Overview
The MultiVersion File System (MVFS) creates a virtual file system specifically designed for accessing data within a Rational ClearCase VOB.
The MVFS works similarly to UNIX® Network File System (NFS), in that it loads a kernel driver that presents a file system to the user through a standard interface within the Windows, UNIX or Linux kernel.
When you start a view and mount a VOB, remote procedure calls (RPC) are made to the view to determine which cleartext files should be presented to the user.
After the MVFS gets a file name and caches it, the operating system (OS) opens a call to the underlying file system where the view or the VOB storage directory resides.
The MVFS runs in the operating system kernel and cannot be stopped or started independently of the OS. Thus, to stop and restart the MVFS on UNIX, Linux or Windows, you must shut down and restart the computer.
The MVFS extends the host’s native operating system to provide file system support for dynamic views. A dynamic view is an MVFS directory that enables dynamic access to VOB elements. Dynamic views use the MVFS to present a selected combination of local and remote files as if they were stored in the native file system.
Notes:
- Rational ClearCase LT does not support MVFS.
- Both Snapshot and Web views do not use the MVFS.
Here are some distinct and similar MVFS capabilities on Windows versus UNIX or Linux.
UNIX and Linux
On any UNIX or Linux host where the MVFS is installed:
- The /view directory functions as the mount point for the MVFS namespace.
- The code that implements the MVFS is (statically or dynamically) linked with a host’s operating system, and how the MVFS is linked depends on the type and version of the operating system.
- The MVFS on UNIX and Linux is always case-sensitive; it always uses case-sensitive file look-up and does no case conversion.
- File names that include these characters are recognized by the MVFS on UNIX and Linux: ? * / \ | < >
- A UNIX or Linux host can export a view-extended path name to some VOB mount point (for example, /view/exportvu/vobs/vegaproj) to allow non-ClearCase read-only access from a host that does not have Rational ClearCase installed.
- The supported file types are Files, Directories and Symbolic links. You cannot create other file types, such as UNIX special files, within a dynamic view.
Microsoft Windows
On a Windows client with MVFS installed:
- Each dynamic view appears as a share under a special network name (\\view, by default) as well as a directory under the client’s MVFS drive (drive M, by default).
- The MVFS is a file system driver that is loaded by the Service Control Manager at system start up.
- The MVFS logs error and status messages to the file C:\mvfslogs. You can use the MVFS tab in the ClearCase program in Control Panel to change this path name.
- The MVFS can be configured to support various case-sensitivity and case-preservation options since the native Windows file system is case-insensitive and case-preserving, and performs case-insensitive file look-up.
- File names that include these characters are not recognized by the MVFS on Windows (and cannot be loaded into a Windows snapshot view): ? * / \ | < >
- The supported file types are Files and Directories.
Related information
MVFS drive denotation changed from \\view\ to \\view
MVFS does not support clustered systems or kernels
About Non-ClearCase Access on UNIX or Linux
Bad Command or file name executing a 16 bit program in
MVFS group membership limitation
Install or uninstall the MVFS on Windows | http://www-01.ibm.com/support/docview.wss?uid=swg21230196 | CC-MAIN-2015-32 | refinedweb | 671 | 57.2 |
I just started programming in C, and while practicing with for loops, I came up with the following piece of code:
#include <stdio.h>
int main()
{
int x;
for (x=0;x=10;x=x+1)
printf("%d\n",x);
return 0;
}
The condition part of your for loop is wrong. What you are doing is:

for (x = 0; x = 10; x = x + 1) { // Operations }

The condition you have got here is x = 10, which is an assignment. So x = 10 will return 10, which also means true. Your for loop is equivalent to:

for (x = 0; true; x = x + 1) { // Operations }

This is why you have got an infinite loop. You should replace the assignment operator = with the comparison operator, which uses two equals signs: ==. With ==, the for loop will only run while x is equal to 10.
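A corrected version of the program (a sketch, assuming the intent was to print 0 through 9) uses a comparison in the condition instead of an assignment:

```c
#include <stdio.h>

/* Corrected loop: the condition now uses a comparison (x < 10), so the
   loop prints 0 through 9 and then terminates instead of running forever. */
int count_to_ten(void)
{
    int x;
    for (x = 0; x < 10; x = x + 1)
        printf("%d\n", x);
    return x; /* x is 10 once the loop condition becomes false */
}
```

Note that with x == 10 as the condition the body would never run at all, since x starts at 0, so a range check such as x < 10 is usually what is intended.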
On Sun, Jun 16, 2013 at 4:38 AM, Glyph <glyph at twistedmatrix.com> wrote:
> (I was going to say that without a format string I couldn't have my
> stand-in UUID feature, but then I realized that namespace + set of keys is
> probably good enough to generate that too, so never mind. Also it seems
> like I'm the only one who likes that feature so maybe it doesn't matter!)

I don't have anything useful to contribute at the moment, but you can add me
to the list of people who like that feature.

--
mithrandi, i Ainil en-Balandor, a faer Ambar
NVMEMDRV
Detailed Description
NVM Non-volatile Memory Wear-Leveling Driver.
Deprecated:
- This driver is deprecated and marked for removal in a later release. New code should use NVM3.
Introduction
This driver allows you to store application data in NVM. The driver supports wear leveling to maximize the lifetime of the underlying NVM system. CCITT CRC16 is used for data validation.
The size and layout of the data objects to be managed by this driver must be known at compile-time. Objects may be composed of any primitive data type (8, 16 or 32-bit).
This driver consists of the files nvm.c, nvm.h and nvm_hal.h. Additionally, an implementation of nvm_hal.c is required for the specific NVM system to be used. An implementation of nvm_hal.c for EFM32/EZR32/EFR32 Flash memory is included with this driver. Driver configuration parameters and specification of the data objects are located in nvm_config.c and nvm_config.h.
Configuration Options
The files nvm_config.c and nvm_config.h contain compile-time configuration parameters and a specification of the user data structures to be managed by the driver, and how these are mapped to pages. A page can be of type normal or wear. A wear page can only contain a single object, but wear pages provide better performance and drastically increase the lifetime of the memory when the object is known to have a low update frequency.
nvm_config.c implements a user data example. The arrays colorTable, coefficientTable, etc. are defined and assigned to NVM pages. A pointer to each page is assigned to the page table nvmPages. The page table also contains a page type specifier. A page is either of type nvmPageTypeNormal or nvmPageTypeWear. Pages of type nvmPageTypeNormal are written to the unused page with the lowest erase count. For pages of type nvmPageTypeWear, the driver first attempts to fit the data into an already used page. If this fails, then a new page is selected based on the lowest erase count. Pages of type nvmPageTypeWear can only contain one data object.
In nvm_config.h, driver features can be enabled or disabled. The following parameters may require special attention:
- NVM_MAX_NUMBER_OF_PAGES: Maximum number of NVM pages allocated to the driver.
- NVM_PAGES_SCRATCH: Configure extra pages to allocate for data security and wear leveling.
- NVM_PAGE_SIZE: Page size for the NVM system. Default is the size of the flash.
Users have to be aware of the following limitations:
- Maximum 254 objects in a page.
- Maximum 256 pages allocated to the driver. The default is 32 pages.
Note that the different EFM32/EZR32/EFR32 families have different page sizes. Please refer to the reference manual for details.
The API
This section contains brief descriptions of the functions defined by the API. You will find detailed information on input and output parameters and return values by clicking on the hyperlinked function names. Most functions return an error code; ECODE_EMDRV_NVM_OK is returned on success. See ecode.h and nvm.h for other error codes.

Your application code must include one header file: nvm.h.
The application may define the data objects allocated in RAM (and defined in nvm_config.c) as extern if direct access to these objects is required, e.g.:
extern uint32_t colorTable[];
The driver requires that the NVM system is erased by calling NVM_Erase() before the driver initialization function NVM_Init() is called. NVM_Init() requires a parameter to the configuration data. A pointer to the configuration data can be obtained by calling NVM_ConfigGet().
NVM_Write() takes two parameters, a page ID and a object ID. These two parameters must correspond to the definition of the user data in nvm_config.c. For example, colorTable is assigned to page 1 in the example version of nvm_config.c. To write the data in colorTable to NVM, call NVM_Write(MY_PAGE_1, COLOR_TABLE_ID).
NVM_Read() reads the a data object or an entire page in NVM back to the structures defined for the page in RAM.
Example
#include "em_chip.h" #include "em_gpio.h" #include "nvm.h" // Data object extern declarations matching the example data defined in nvm_config.c extern uint32_t colorTable[]; extern uint8_t coefficientTable[]; extern uint8_t primeNumberTable[]; extern uint16_t bonusTable[]; extern uint8_t privateKeyTable[]; extern uint16_t transformTable[]; extern int32_t safetyTable[]; extern uint8_t bigEmptyTable[450]; extern int8_t smallNegativeTable[]; extern uint16_t shortPositiveTable[]; extern uint32_t singleVariable; // Object and page IDs maching the data defined in nvm_config.c typedef enum { COLOR_TABLE_ID, COEFFICIENT_TABLE_ID, PRIME_NUMBER_TABLE_ID, BONUS_TABLE_ID, PRIVATE_KEY_TABLE_ID, TRANSFORM_TABLE_ID, SINGLE_VARIABLE_ID, SAFETY_TABLE_ID, BIG_EMPTY_TABLE_ID, SMALL_NEGATIVE_TABLE_ID, SHORT_POSITIVE_TABLE_ID } NVM_Object_Ids; typedef enum { MY_PAGE_1, MY_PAGE_2, MY_PAGE_3, MY_PAGE_4, MY_PAGE_5, MY_PAGE_6, } NVM_Page_Ids; CHIP_Init(); // Erase all pages managed by the driver and set the erase count // for each page to 0. To retain the erase count, pass NVM_ERASE_RETAINCOUNT. 
NVM_Erase(0); if (ECODE_EMDRV_NVM_NO_PAGES_AVAILABLE == NVM_Init(NVM_ConfigGet()) { // The driver could not initialize any pages } // Write all pages to NVM NVM_Write(MY_PAGE_1, NVM_WRITE_ALL_CMD)); NVM_Write(MY_PAGE_2, NVM_WRITE_ALL_CMD)); NVM_Write(MY_PAGE_3, NVM_WRITE_ALL_CMD)); NVM_Write(MY_PAGE_4, NVM_WRITE_ALL_CMD)); NVM_Write(MY_PAGE_5, NVM_WRITE_ALL_CMD)); NVM_Write(MY_PAGE_6, NVM_WRITE_ALL_CMD)); // Set some data elements to 0 for (i = 0; i < 4; i++) { bonusTable[i] = 0; primeNumberTable[i] = 0; } // Read back from NVM and check NVM_Read(MY_PAGE_1, NVM_READ_ALL_CMD)); NVM_Read(MY_PAGE_4, PRIME_NUMBER_TABLE_ID)); for (i = 0; i < 4; i++) { if (bonusTable[i] == 0) { // Should not happen because bonusTable[] in NVM should contain the // constants set in nvm_config.c } if (primeNumberTable[i] == 0) { // Should not happen because primeNumberTable[] in NVM should contain the // constants set in nvm_config.c } }
Macro Definition Documentation
Success return value.
Return/error codes
Definition at line 47 of file nvm.h.
Referenced by NVM_Erase(), NVM_Init(), NVM_Read(), and NVM_Write().
Retains the registered erase count when eraseing a page.
Definition at line 64 of file nvm.h.
Referenced by NVM_Erase().
Structure defining end of pages table.
Definition at line 67 of file nvm.h.
All objects are read to RAM.
Definition at line 61 of file nvm.h.
Referenced by NVM_Read().
All objects are written from RAM.
Definition at line 57 of file nvm.h.
Referenced by NVM_Write().
All objects are copied from the old page.
Definition at line 59 of file nvm.h.
Function Documentation
Erase the entire allocated NVM area.
Use this function to erase the entire non-volatile memory area allocated to the NVM system. It is possible to set a fixed erase count for all the pages, or retain the existing one. Retaining the erase count might not be advisable if an error has occurred, since this data may also have been damaged.
- Parameters
-
- Returns
- Returns the result of the erase operation.
Definition at line 354 of file nvm.c.

References ECODE_EMDRV_NVM_ERROR, ECODE_EMDRV_NVM_OK, NVM_ERASE_RETAINCOUNT, NVMHAL_PageErase(), NVMHAL_Read(), and NVMHAL_Write().
Initialize the NVM manager.
Use this function to initialize and validate the NVM. Should be run on startup. The result of this process is then returned in the form of an Ecode_t.
If ECODE_EMDRV_NVM_OK is returned, everything went according to plan and you can use the API right away. If ECODE_EMDRV_NVM_NO_PAGES_AVAILABLE is returned, this is a device that validates but is empty. The proper way to handle this is to first reset the memory using NVM_Erase, and then write any initial data.

If ECODE_EMDRV_NVM_ERROR, or anything more specific, is returned, something irreparable happened and the system cannot be used reliably. A simple solution to this would be to erase and reinitialize, but this will then cause data loss.
- Parameters
-
- Returns
- Returns the result of the initialization.
Definition at line 184 of file nvm.c.

References ECODE_EMDRV_NVM_ERROR, ECODE_EMDRV_NVM_NO_PAGES_AVAILABLE, ECODE_EMDRV_NVM_OK, NVMHAL_Init(), and NVMHAL_Read().
Read an object or an entire page.
Use this function to read an object or an entire page from memory. It takes a page id and an object id (or the NVM_READ_ALL constant to read everything) and reads data from flash and puts it in the memory locations given in the page specification.
- Parameters
-
- Returns
- Returns the result of the read operation.
Definition at line 768 of file nvm.c.

References ECODE_EMDRV_NVM_DATA_INVALID, ECODE_EMDRV_NVM_OK, ECODE_EMDRV_NVM_PAGE_INVALID, NVM_READ_ALL_CMD, and NVMHAL_Read().
Write an object or a page.
Use this function to write an object or an entire page to NVM. It takes a page and an object and updates this object with the data pointed to by the corresponding page entry. All the objects in a page can be written simultaneously by using NVM_WRITE_ALL instead of an object ID. For normal pages, it simply finds an unused page in flash with the lowest erase count and copies all objects belonging to this page, updating the objects specified by the objectId argument. For wear pages, this function tries to find spare space in an already used page and writes the object there. If there is no free space, it uses a new page and invalidates the previously used one.
- Parameters
-
- Returns
- Returns the result of the write operation.
Definition at line 428 of file nvm.c.

References ECODE_EMDRV_NVM_ERROR, ECODE_EMDRV_NVM_OK, NVM_WRITE_ALL_CMD, NVMHAL_Read(), and NVMHAL_Write().
I am trying to create a batched version of the method that I am writing and I wanted to compute the loss over the whole dataset and then optimize for the specific slices. An example of what i want to do can be seen in
import torch as th from torch.autograd import * x = Variable(th.arange(4), requires_grad=True) loss = th.sum(th.max(x ** 3, th.zeros(4))) print(th.autograd.grad(loss, x)) print(th.autograd.grad(loss, x[:2]))
where I wish the last print would give the derivative for the first two elements. So I need someway of slicing the data while preserving the graph. How can I do this, without changing the way that I compute the loss? | https://discuss.pytorch.org/t/optimize-over-a-slice-of-the-data/22659 | CC-MAIN-2022-21 | refinedweb | 124 | 76.62 |
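One workaround (a sketch, assuming a reasonably recent PyTorch) follows from the fact that x[:2] is not a leaf tensor and so cannot be passed to torch.autograd.grad directly: compute the gradient with respect to the whole leaf tensor, then slice the result.

```python
import torch

x = torch.arange(4.0, requires_grad=True)
loss = torch.sum(torch.max(x ** 3, torch.zeros(4)))

# Gradient w.r.t. the full leaf tensor: d(loss)/dx = 3*x**2 where x > 0
(full_grad,) = torch.autograd.grad(loss, x)

# The derivative for the first two elements is just a slice of that result
grad_first_two = full_grad[:2]
```

This keeps the loss computed over the whole dataset while letting you inspect, or step an optimizer with, only the slice you care about.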
Quoting Eric Blake (eblake redhat com):
> On 05/03/2012 11:55 AM, Stefan Berger wrote:
> >>
> >> +#ifdef HAVE_LIBNL1
> >> +#define nl_alloc nl_handle_alloc
> >> +#define nl_free nl_handle_destroy
> >> +typedef struct nl_handle nlhandle_t;
> >> +#else
> >> +#define nl_alloc nl_socket_alloc
> >> +#define nl_free nl_socket_free
> >> +typedef struct nl_sock nlhandle_t;
> >> +#endif
> >> +
> >
> > I would not #define in the namespace of that library (nl_*).
>
> Agreed that a vir* namespace is safer.
>
> > What about the following:
> >
> > #ifdef HAVE_LIBNL1
> >
> > static struct nl_handle *
> > virNLHandleAlloc(void)
> > {
> >     return nl_handle_alloc();
> > }
>
> One further:
>
> typedef struct nl_handle virNLHandle;
>
> static virNLHandle *
> virNLHandleAlloc(void) ...
>
> so that the rest of the code is indeed isolated into virNL wrappers with
> no additional #ifdefs.

Yup, I like it, thanks guys. I don't know whether I'll have time to send a new patch tomorrow. If not I'll aim to write one over the weekend, but if someone else wants to make the (somewhat trivial) updates I won't feel upstaged :)

thanks,
-serge
setting up general app path for DLLs and avoiding Could not load type '<project>.<class>' error
Discussion in 'ASP .Net' started by Wolfgang Kaml, Jan 17, 2004.
A continuous integration/continuous deployment (CI/CD) pipeline is the spine of the modern DevOps environment. It bridges the gap between the development and operations teams by automating the building, testing and deployment of applications. This article tells you how to set up a CI/CD pipeline using Kubernetes.
Jenkins is an open source continuous integration and continuous delivery tool, which can be used to automate the building, testing and deployment of software. It is generally considered the most widely adopted automation server, and is used by more than a million users worldwide. Jenkins is the best choice for implementing CI/CD.
In this article, we will first try to understand what a CI/CD pipeline is and why it is important. We will then try to set up a CI/CD pipeline with the help of Kubernetes. So, let’s start.
What is a pipeline and what is CI/CD?
In computer science, a pipeline can also be called a data pipeline. It is a set of data processing elements connected in series, where the output of one element is the input of the next one. The components of a pipeline are often executed in parallel or in a time-sliced fashion. While CI stands for continuous integration, CD stands for continuous delivery/continuous deployment. Continuous integration is a set of practices that has development teams implement small changes and check code into version control repositories regularly. The main goal of CI is to create a consistent and automated way to build, package and test applications. Continuous delivery picks up where continuous integration ends. CD automates the delivery of applications to particular infrastructure environments. Many teams work with numerous environments other than production, such as development and QA environments, and CD makes sure there is an automated way to push code changes to them.
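As an illustration, the build-test-deliver flow described above is often expressed as pipeline-as-code. A minimal declarative Jenkinsfile sketch (the stage commands and branch name are placeholders, not part of any specific project):

```groovy
pipeline {
    agent any
    stages {
        stage('Build') {
            steps { sh 'make build' }            // compile and package the application
        }
        stage('Test') {
            steps { sh 'make test' }             // run the automated test suite
        }
        stage('Deploy') {
            when { branch 'main' }               // only deliver from the main branch
            steps { sh 'kubectl apply -f k8s/' } // roll the change out to the cluster
        }
    }
}
```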
Why use it?
Focus resources on what matters: by automating the process and delegating it to the CI/CD pipeline, resources are freed for actual product development tasks and the chance of human error is reduced.
Increase transparency and visibility: When a CI/CD pipeline is set up, the entire team knows what’s going on with the build as well as gets the latest results of tests, which means the team can raise issues and plan its work in context.
Detect and fix issues early: CI/CD pipeline deployment is automated and fast, which means the tester/QA gets more time to detect problems and developers get more time to fix them. The software can be built and deployed any number of times with little effort, so it ships with fewer bugs.
Improve quality and testability: easier testing makes it easier to achieve quality. Testability has many dimensions: it can be judged by how observable, controllable and decomposable the outcome is. Testability is also affected by how effortlessly new builds are accessible and what tools are used. Continuous integration and delivery runs your tests and delivers builds regularly and consistently.
The prerequisites for setting up a CI/CD pipeline are:
1. Docker engine should be installed on the platform.
2. minikube and kubectl should be installed on the platform.
To start a local Kubernetes cluster with minikube, run:
$ minikube start
Once the cluster is up, its status can be confirmed by entering:
$ minikube status
Setting up a CI/CD pipeline with Kubernetes
Before setting up the pipeline, you should be familiar with Kubernetes, which is an open source container orchestration tool. Its main function is to direct containerised applications on clusters of nodes by serving operators; organise, scale, update, and maintain their services; and provide mechanisms for service discovery.
Setting up/installing Jenkins on Kubernetes
First, we need to install Helm, which is the package manager for Kubernetes:
$ curl > get_helm.sh
$ chmod 700 get_helm.sh
$ ./get_helm.sh -v v3.5.2
After that, we have to configure Helm. To do this, add the Jenkins chart repository with helm repo add. (Older Helm 2 setups also needed Tiller installed for Helm to run correctly.)
Next, we need to run the inspect command to verify the configuration values of the deployment:
$ helm inspect values stable/jenkins > values.yml
Keep a watchful check on the configuration values and make changes if needed. Then install the chart:
$ helm install stable/jenkins --tls \
    --name jenkins \
    --namespace jenkins
The installation process will display some instructions for what has to be done next.
Points to remember
Get your ‘admin’ user password by running:
printf $(kubectl get secret --namespace default my-jenkins -o jsonpath="{.data.jenkins-admin-password}" | base64 --decode); echo
Get the Jenkins URL by running the commands printed by the installer; they start a local proxy (typically a kubectl port-forward to the Jenkins service) listening on a localhost address.
Open that localhost URL and enter the user name and the admin password retrieved above. Your personal Jenkins server will open in a few minutes.
Kubernetes and CI/CD practices are an awesome match. We have learnt what a CI/CD pipeline is, why to use it, and how to set it up with Kubernetes. Remember that what has been installed here is only the most basic setup; there are still lots of configuration options that can be applied. We can also use Kubernetes to scale up the CI/CD pipeline; hopefully, we will learn about that in the next article.
A simple module for generating a bunch of SOPs for different schools.
Project description
Statement of Purpose (SOP) Generator -- docx version (DansonGo 5)
Description
Using the same content framework, the package will generate multiple SOPs for different schools by replacing the school and program names with the target school and program in the statement.
Prerequisites
- An Excel file that includes school names and program names (e.g., school_list.xlsx)
- A docx file of your statement template (e.g., SOP_template.docx)
How to use (Mac)
Method 1: Run as package
Step 1: Pip Install package from your terminal
pip install DansonGo-5
Step 2: Generate SOPs
from SOP_GEN.GEN_SOP import GEN_SOP
import os

# Initial parameters
school_list = "~/school_list.xlsx"     # The excel file path of your school and program list
SOP_temp_file = "~/SOP_template.docx"  # The docx file path of your SOP template.
School_var = "School"                  # The column name where school names are saved in your school list.
Program_var = "Program"                # The column name where program names are saved in your school list.
output_path = os.getcwd()              # Output path where you want to save your output files.

# Generate SOPs
GEN_SOP(school_list, SOP_temp_file, School_var, Program_var, output_path).gen_sop()
Method 2: Run from terminal
Step 1: Clone the SOP_GEN from GitHub.
Step 2: Run "GEN_SOP.py" in terminal from the folder that contains "GEN_SOP.py" and two prerequisite files.
Rename your school list excel file as "school_list.xlsx".
In "school_list.xlsx", please make sure that the school and program column names are "School" and "Program", respectively. (TAKE CARE OF THE FIRST CAPITAL LETTER)
Rename your docx as "SOP_template.docx"
In your "SOP_template.docx", please label the school position as "[SCHOOL_NAME]". Please label the program position as "[PROGRAM_NAME]"
Run the command below from terminal
python GEN_SOP.py
Hello, I'm having issues with my final total for charges. When I run my program, it either comes up with "0" or, if I don't initialize my variable, it comes up with a long string of numbers. My total hours works great, but not my charges. Can someone take a look at my code and see if they spot something I need to look at? I thought it was my setprecision or fixed, but it's not. Thanks
#include <iostream>
using std::cout;
using std::cin;
using std::endl;
using std::ios;
using std::fixed;

#include <iomanip>
using std::setw;
using std::setiosflags;
using std::setprecision;

#include <cmath>

// function prototypes
double calculateCharge(double);
double Totalhours(double);

int count;
int total_hours;
int total_charge;

int main()
{
    int num;
    int charge;
    int count = 1;
    int total_hours = 0;
    int total_charge = 0;

    for (int i = 0; i < 3; ++i)
    {
        cout << "enter hours parked ";
        cin >> num;

        cout << setw(4) << "CAR " << setw(20) << "HOURS" << setw(18) << "CHARGE\n";
        cout << count << setw(20) << num << setw(18) << fixed
             << setprecision(2) << calculateCharge(charge) << endl;

        charge = charge;
        count = count + 1;
        total_hours = total_hours + num;
        total_charge = total_charge + charge;
    }

    cout << setw(05) << "TOTAL HOURS" << setw(10) << total_hours << setw(16)
         << fixed << setprecision(2) << endl;
    cout << fixed << setprecision(2);
    cout << setw(10) << "TOTAL CHARGES" << setw(10) << total_charge << setw(16) << endl;
    cout << setw(10) << "TOTAL CHARGES" << setw(10) << charge << setw(16) << endl;

    return 0;
} // end main

double calculateCharge(double x)
{
    double charge;
    if (x <= 3)
        charge = 2;
    else if (x > 19)
        charge = 10;
    else if (x > 3)
        charge = 2 + (x - 3) * (.5);
    return charge;
}
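For reference, here is a hedged sketch of the likely fix (not the poster's exact program): the loop above calls calculateCharge(charge) on an uninitialized variable and never stores the result, so total_charge accumulates garbage (or 0). Capturing the return value, passing the hours in, and using double for money addresses both symptoms:

```cpp
#include <vector>

double calculateCharge(double hours)
{
    if (hours <= 3)  return 2.0;      // flat fee up to 3 hours
    if (hours > 19)  return 10.0;     // daily maximum
    return 2.0 + (hours - 3) * 0.5;   // $0.50 per extra hour
}

double totalCharges(const std::vector<double>& allHours)
{
    double total = 0.0;
    for (double h : allHours) {
        double charge = calculateCharge(h);  // store the result this time
        total += charge;
    }
    return total;
}
```

Inside the original loop the equivalent change is `double charge = calculateCharge(num);` followed by `total_charge += charge;`, with the totals declared as double rather than int so the 50-cent increments are not truncated.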
#include <unistd.h>
#include <sys/uio.h>
After a write() to a regular file has successfully returned:
Write requests to a pipe or FIFO are handled the same as a regular file with the following exceptions:
When attempting to write to a file descriptor (other than a pipe, a FIFO, a socket, or a STREAM) that supports nonblocking writes and cannot accept the data immediately::
The write() and pwrite() functions will fail if:
The pwrite() function fails and the file pointer remains unchanged if:
The write() and writev() functions may fail if:
A write to a STREAMS file may fail if an error message has been received at the STREAM head. In this case, errno is set to the value included in the error message.
The writev() function may fail if:
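The short-write and EINTR behavior described above means robust callers usually wrap write() in a loop. A typical helper might look like this (illustrative, not part of the API above):

```c
#include <errno.h>
#include <stddef.h>
#include <unistd.h>

/* Loop until all of buf has been written, retrying on EINTR and
 * continuing after short writes; returns count or -1 on error. */
ssize_t write_all(int fd, const void *buf, size_t count)
{
    const char *p = buf;
    size_t left = count;
    while (left > 0) {
        ssize_t n = write(fd, p, left);
        if (n < 0) {
            if (errno == EINTR)
                continue;          /* interrupted before transferring data: retry */
            return -1;             /* real error: caller inspects errno */
        }
        p += n;                    /* short write: advance past what was written */
        left -= (size_t)n;
    }
    return (ssize_t)count;
}
```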
In this article I want to explore what happens when a statically linked program gets executed on Linux. By statically linked I mean a program that does not require any shared objects to run, even the ubiquitous libc. In reality, most programs one encounters on Linux aren't statically linked, and do require one or more shared objects to run. However, the running sequence of such programs is more involved, which is why I want to present statically linked programs first. It will serve as a good basis for understanding, allowing me to explore most of the mechanisms involved with less details getting in the way. In a future article I will cover the dynamic linking process in detail.
The Linux kernel
Program execution begins in the Linux kernel. To run a program, a process will call a function from the exec family. The functions in this family are all very similar, differing only in small details regarding the manner of passing arguments and environment variables to the invoked program. What they all end up doing is issuing the sys_execve system call to the Linux kernel.
sys_execve does a lot of work to prepare the new program for execution. Explaining it all is far beyond the scope of this article - a good book on kernel internals can be helpful to understand the details [1]. I'll just focus on the stuff useful for our current discussion.
As part of its job, the kernel must read the program's executable file from disk into memory and prepare it for execution. The kernel knows how to handle a lot of binary file formats, and tries to open the file with different handlers until it succeeds (this happens in the function search_binary_handler in fs/exec.c). We're only interested in ELF here, however; for this format the action happens in function load_elf_binary (in fs/binfmt_elf.c).
The kernel reads the ELF header of the program, and looks for a PT_INTERP segment to see if an interpreter was specified. Here the statically linked vs. dynamically linked distinction kicks in. For statically linked programs, there is no PT_INTERP segment. This is the scenario this article covers.
The kernel then goes on mapping the program's segments into memory, according to the information contained in the ELF program headers. Finally, it passes the execution, by directly modifying the IP register, to the entry address read from the ELF header of the program (e_entry). Arguments are passed to the program on the stack (the code responsible for this is in create_elf_tables). Here's the stack layout when the program is called, for x64:
At the top of the stack is argc, the amount of command-line arguments. It is followed by all the arguments themselves (each a char*), terminated by a zero pointer. Then, the environment variables are listed (also a char* each), terminated by a zero pointer. The observant reader will notice that this argument layout is not what one usually expects in main. This is because main is not really the entry point of the program, as the rest of the article shows.
Program entry point
So, the Linux kernel reads the program's entry address from the ELF header. Let's now explore how this address gets there.
Unless you're doing something very funky, the final program binary image is probably being created by the system linker - ld. By default, ld looks for a special symbol called _start in one of the object files linked into the program, and sets the entry point to the address of that symbol. This will be simplest to demonstrate with an example written in assembly (the following is NASM syntax):
section .text

; The _start symbol must be declared for the linker (ld)
global _start

_start:
    ; Execute sys_exit call. Argument: status -> ebx
    mov eax, 1
    mov ebx, 42
    int 0x80
This is a very basic program that simply returns 42. Note that it has the _start symbol defined. Let's build it, examine the ELF header and its disassembly:
$ nasm -f elf64 nasm_rc.asm -o nasm_rc.o
$ ld -o nasm_rc64 nasm_rc.o
$ readelf -h nasm_rc64
ELF Header:
  Magic:   7f 45 4c 46 02 01 01 00 00 00 00 00 00 00 00 00
  Class:                             ELF64
  ...
  Entry point address:               0x400080
  ...
$ objdump -d nasm_rc64

nasm_rc64:     file format elf64-x86-64

Disassembly of section .text:

0000000000400080 <_start>:
  400080:       b8 01 00 00 00          mov    $0x1,%eax
  400085:       bb 2a 00 00 00          mov    $0x2a,%ebx
  40008a:       cd 80                   int    $0x80
As you can see, the entry point address in the ELF header was set to 0x400080, which also happens to be the address of _start.
ld looks for _start by default, but this behavior can be modified by either the --entry command-line flag, or by providing an ENTRY command in a custom linker script.
The entry point in C code
We're usually not writing our code in assembly, however. For C/C++ the situation is different, because the entry point familiar to users is the main function and not the _start symbol. Now it's time to explain how these two are related.
Let's start with this simple C program which is functionally equivalent to the assembly shown above:
int main() { return 42; }
I will compile this code into an object file and then attempt to link it with ld, like I did with the assembly:
$ gcc -c c_rc.c
$ ld -o c_rc c_rc.o
ld: warning: cannot find entry symbol _start; defaulting to 00000000004000b0
Whoops, ld can't find the entry point. It tries to guess using a default, but it won't work - the program will segfault when run. ld obviously needs some additional object files where it will find the entry point. But which object files are these? Luckily, we can use gcc to find out. gcc can act as a full compilation driver, invoking ld as needed. Let's now use gcc to link our object file into a program. Note that the -static flag is passed to force static linking of the C library and the gcc runtime library:
$ gcc -o c_rc -static c_rc.o
$ c_rc; echo $?
42
It works. So how does gcc manage to do the linking correctly? We can pass the -Wl,-verbose flag to gcc which will spill the list of objects and libraries it passed to the linker. Doing this, we'll see additional object files like crt1.o and the whole libc.a static library (which has objects with telling names like libc-start.o). C code does not live in a vacuum. To run, it requires some support libraries such as the gcc runtime and libc.
Since it obviously linked and ran correctly, the program we built with gcc should have a _start symbol at the right place. Let's check [2]:
$ readelf -h c_rc
ELF Header:
  Magic:   7f 45 4c 46 02 01 01 03 00 00 00 00 00 00 00 00
  Class:                             ELF64
  ...
  Entry point address:               0x4003c0
  ...
$ objdump -d c_rc | grep -A15 "<_start"
00000000004003c0 <_start>:
  4003c0:       31 ed                   xor    %ebp,%ebp
  4003c2:       49 89 d1                mov    %rdx,%r9
  4003c5:       5e                      pop    %rsi
  4003c6:       48 89 e2                mov    %rsp,%rdx
  4003c9:       48 83 e4 f0             and    $0xfffffffffffffff0,%rsp
  4003cd:       50                      push   %rax
  4003ce:       54                      push   %rsp
  4003cf:       49 c7 c0 20 0f 40 00    mov    $0x400f20,%r8
  4003d6:       48 c7 c1 90 0e 40 00    mov    $0x400e90,%rcx
  4003dd:       48 c7 c7 d4 04 40 00    mov    $0x4004d4,%rdi
  4003e4:       e8 f7 00 00 00          callq  4004e0 <__libc_start_main>
  4003e9:       f4                      hlt
  4003ea:       90                      nop
  4003eb:       90                      nop
Indeed, 0x4003c0 is the address of _start and it's the program entry point. However, what is all that code at _start? Where does it come from, and what does it mean?
Decoding the start sequence of C code
The startup code shown above comes from glibc - the GNU C library, where for x64 ELF it lives in the file sysdeps/x86_64/start.S [3]. Its goal is to prepare the arguments for a function named __libc_start_main and call it. This function is also part of glibc and lives in csu/libc-start.c. Here is its signature, formatted for clarity, with added comments to explain what each argument means:
int __libc_start_main(
    /* Pointer to the program's main function */
    int (*main) (int, char**, char**),
    /* argc and argv */
    int argc, char **argv,
    /* Pointers to initialization and finalization functions */
    __typeof (main) init, void (*fini) (void),
    /* Finalization function for the dynamic linker */
    void (*rtld_fini) (void),
    /* End of stack */
    void* stack_end)
Anyway, with this signature and the AMD64 ABI in hand, we can map the arguments passed to __libc_start_main from _start:
main:      rdi <-- $0x4004d4
argc:      rsi <-- [RSP]
argv:      rdx <-- [RSP + 0x8]
init:      rcx <-- $0x400e90
fini:      r8  <-- $0x400f20
rtld_fini: r9  <-- rdx on entry
stack_end: on stack <-- RSP
You'll also notice that the stack is aligned to 16 bytes and some garbage is pushed on top of it (rax) before pushing rsp itself. This is to conform to the AMD64 ABI. Also note the hlt instruction at address 0x4003e9. It's a safeguard in case __libc_start_main did not exit (as we'll see, it should). hlt can't be executed in user mode, so this will raise an exception and crash the process.
Examining the disassembly, it's easy to verify that 0x4004d4 is indeed main, 0x400e90 is __libc_csu_init and 0x400f20 is __libc_csu_fini. There's another argument the kernel passes _start - a finish function for shared libraries to use (in rdx). We'll ignore it in this article.
The C library start function
Now that we understood how it's being called, what does __libc_start_main actually do? Ignoring some details that are probably too specialized to be interesting in the scope of this article, here's a list of things that it does for a statically linked program:
- Figure out where the environment variables are on the stack.
- Prepare the auxiliary vector, if required.
- Initialize thread-specific functionality (pthreads, TLS, etc.)
- Perform some security-related bookkeeping (this is not really a separate step, but is trickled all through the function).
- Initialize libc itself.
- Call the program initialization function through the passed pointer (init).
- Register the program finalization function (fini) for execution on exit.
- Call main(argc, argv, envp)
- Call exit with the result of main as the exit code.
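The overall control flow of the last few steps can be modeled with a toy, greatly simplified stand-in (all names here are illustrative; real glibc also sets up TLS, performs security checks, registers fini with atexit, and calls exit() rather than returning):

```c
#include <stddef.h>

static char trace[4];
static int  traced;

static void my_init(void) { trace[traced++] = 'I'; }  /* like __libc_csu_init */
static void my_fini(void) { trace[traced++] = 'F'; }  /* like __libc_csu_fini */

static int my_main(int argc, char **argv, char **envp)
{
    (void)argc; (void)argv; (void)envp;
    trace[traced++] = 'M';
    return 42;
}

static int toy_libc_start_main(int (*main_fn)(int, char **, char **),
                               int argc, char **argv,
                               void (*init)(void), void (*fini)(void))
{
    char **envp = argv + argc + 1;       /* envp sits just past argv's NULL */
    init();                              /* program initialization          */
    int rc = main_fn(argc, argv, envp);  /* call main(argc, argv, envp)     */
    fini();                              /* finalizer, registered for exit  */
    return rc;                           /* real glibc: exit(rc)            */
}
```

Calling toy_libc_start_main(my_main, ...) records the order I, M, F and yields 42, mirroring the init/main/fini/exit sequence listed above.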
Digression: init and fini
Some programming environments (most notably C++, to construct and destruct static and global objects) require running custom code before and after main. This is implemented by means of cooperation between the compiler/linker and the C library. For example, the __libc_csu_init (which, as you can see above, is called before the user's main) calls into special code that's inserted by the linker. The same goes for __libc_csu_fini and finalization.
You can also ask the compiler to register your function to be executed as one of the constructors or destructors. For example [4]:
#include <stdio.h>

int main() {
    return 43;
}

__attribute__((constructor))
void myconstructor() {
    printf("myconstructor\n");
}
myconstructor will run before main. The linker places its address in a special array of constructors located in the .ctors section. __libc_csu_init goes over this array and calls all functions listed in it.
Conclusion
This article demonstrates how a statically linked program is set up to actually run on Linux. In my opinion, this is a very interesting topic to study because it demonstrates how several large components of the Linux eco-system cooperate to enable the program execution process. In this case, the Linux kernel, the compiler and linker, and the C library are involved. In a future article I will present the more complex case of a dynamically linked program, where another agent joins the game - the dynamic linker. Stay tuned.
#include <gfx_image_pixelhandler.h>
Helper class returned by SetPixelHandler. This class provides/caches fast access to pixels; a lambda contains the most efficient code to access the pixel data. This object is only valid as long as the bitmap properties (pixel format, pixel storage layout, width, height) don't change. The SetPixelHandlerStruct must be accessed from only one thread. If you want to set pixels in a multi-threaded way, you need to get a SetPixelHandlerStruct for each thread.
Default Constructor. Initializes everything with nullptr.
Move Constructor.
Constructor to initialize the helper class.
Destructor.
Returns true if the structure is initialized correctly and a SetPixelHandler is set.
Copies/Writes the pixel data from the buffer to the bitmap.
Returns the modified region that was touched by all the SetPixel() calls of this handler.
Icons
There is no point in hiding it - the `icons` module is really awesome! Icons behave just like shapes, but there are 766 of them, as the module allows you to use any of the free Font Awesome icons.
To use the module you have to import it first by writing `import icons`. This will let the Shrew know to prepare the awesome icons. For example, the following code paints a red truck:
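(The original snippet was lost from this page; the following reconstruction is a guess based on the conventions described below. The icon name `Truck` and the way the color is set are assumptions, not confirmed by the original, and the code runs only inside the Code Shrew environment.)

```
import icons

truck = icons.Truck()
truck.color = "red"
```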
To explore all 766 icons, you can visit the list on fontawesome.com. When using the icons in Code Shrew, change their names to CamelCase. For example, to use an icon named `bowling-ball`, write `icons.BowlingBall()`.
Icons' properties and methods
Just like shapes, icons can be modified using properties and methods. The following list applies to every icon.
Icons' properties
- width (default: depends on the icon) - the width of the icon
- height (default: depends on the icon) - the height of the icon
- x (default: `50`) - the horizontal coordinate describing where the center of the icon should be
- y (default: `50`) - the vertical coordinate describing where the center of the icon should be
- color (default: `"black"`) - the color or color gradient of the icon
- transparency (default: `0`) - how transparent the icon should be (from 0 to 100)
- rotation (default: `0`) - the amount of icon's rotation (from 0 to 360 degrees)
Icons' methods
- copy() - create a new icon with the same properties as the original one
- flip_horizontal() - flip the icon horizontally
- flip_vertical() - flip the icon vertically
- enlarge(amount) - make the icon `amount` times bigger (for example, `my_rectangle.enlarge(2)` will make the rectangle two times bigger)
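Putting the properties and methods together, a hypothetical example (the icon name and values are illustrative, and this runs only inside Code Shrew):

```
import icons

ball = icons.BowlingBall()
ball.x = 30
ball.y = 70
ball.color = "blue"
ball.rotation = 45
ball.enlarge(2)
```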
Description
Super-simple GUI to grasp... Powerfully customizable.
Works with multiple GUI frameworks (tkinter, Qt, WxPython, Remi (browser-based)) to supply a single source code solution that runs on any of these platforms. Write your GUI code once, run it on your choice of GUI frameworks.
Looking to take your Python code from the world of command lines and into the convenience of a GUI? Have a Raspberry Pi with a touchscreen that's going to waste because you don't have the time to learn a GUI SDK? Struggling to work with OOP GUI Frameworks? Look no further, you've found your GUI package.
------------------------------------------------------------------
It's trivial for beginners to grasp, and the end result on the screen is identical to what pages of code written directly in the underlying GUI packages would produce. PySimpleGUI is an SDK that embraces the Python language.
Over 200 pages of documentation and cookbook recipes to give you a jump start and enables you to get a GUI on the screen in 10 minutes.
PySimpleGUI alternatives and similar packages

Based on the "GUI" category. Alternatively, view PySimpleGUI alternatives based on common mentions on social networks and blogs.

- kivy - Open source UI framework written in Python, running on Windows, Linux, macOS, Android and iOS
- DearPyGui - Dear PyGui: A fast and powerful Graphical User Interface Toolkit for Python with minimal dependencies
- Eel - A little Python library for making simple Electron-like HTML/JS GUI apps
- Toga - A Python native, OS native GUI toolkit.
- Flexx - Write desktop and web apps in pure Python
- pywebview - Build GUI for your Python program with JavaScript, HTML, and CSS
- urwid - Console user interface library for Python (official repo)
- enaml - Declarative User Interfaces for Python
- wxPython - A blending of the wxWidgets C++ class library with the Python programming language.
- PyGObject - Tutorial for using GTK+ 3 in Python
- EasyGUI - easygui for Python
- PySide - ATTENTION: This project is deprecated, please refer to PySide2
- Python bindings for Sciter
- signalum-desktop - A Desktop application for the signalum python library
- pyglet - A cross-platform windowing and multimedia library for Python.
- curses - Built-in wrapper for ncurses used to create terminal GUI applications.
- PyQt - Python bindings for the Qt cross-platform application and UI framework, with support for both Qt v4 and Qt v5 frameworks.
- Tkinter - Tkinter is Python's de-facto standard GUI package.
README
Python GUIs for Humans
Transforms the tkinter, Qt, WxPython, and Remi (browser-based) GUI frameworks into a simpler interface. The window definition is simplified by using Python core data types understood by beginners (lists and dictionaries). Further simplification happens by changing event handling from a callback-based model to a message passing one.
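The difference between the two event models can be sketched without any GUI at all (illustrative only; this is not PySimpleGUI's actual implementation):

```python
import queue

log = []

# Callback model: the toolkit calls your handler when an event occurs.
callbacks = {}
def on(event, fn):
    callbacks[event] = fn

on("Ok", lambda: log.append("callback: Ok"))
callbacks["Ok"]()              # the framework invokes your code

# Message-passing model: your own loop asks for the next event.
events = queue.Queue()
def read():
    return events.get()

events.put("Ok")               # the framework posts the event...
event = read()                 # ...and your program pulls it when ready
log.append("message: " + event)
```

In the message-passing style the control flow stays in your program, which is why a PySimpleGUI program reads as a plain top-to-bottom loop instead of a set of callback registrations.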
Your code is not required to have an object oriented architecture which makes the package usable by a larger audience. While the architecture is simple to understand, it does not necessarily limit you to only simple problems.
Some programs are not well-suited for PySimpleGUI however. By definition, PySimpleGUI implements a subset of the underlying GUI frameworks' capabilities. It's difficult to define exactly which programs are well suited for PySimpleGUI and which are not. It depends on the details of your program. Duplicating Excel in every detail is an example of something not well suited for PySimpleGUI.
Japanese version of this readme.
I could use a coffee! It fuels consultants, editors, domain registration and so many other things required for PySimpleGUI to be a thriving project. Every donation helps, and help is needed and appreciated.
What Is PySimpleGUI ❓
PySimpleGUI is a Python package that enables Python programmers of all levels to create GUIs. You specify your GUI window using a "layout" which contains widgets (they're called "Elements" in PySimpleGUI). Your layout is used to create a window using one of the 4 supported frameworks to display and interact with your window. Supported frameworks include tkinter, Qt, WxPython, or Remi. The term "wrapper" is sometimes used for these kinds of packages.
Your PySimpleGUI code is simpler and shorter than writing directly using the underlying framework because PySimpleGUI implements much of the "boilerplate code" for you. Additionally, interfaces are simplified to require as little code as possible to get the desired result. Depending on the program and framework used, a PySimpleGUI program may require 1/2 to 1/10th amount of code to create an identical window using one of the frameworks directly.
While the goal is to encapsulate/hide the specific objects and code used by the GUI framework you are running on top of, if needed you can access the frameworks' dependent widgets and windows directly. If a setting or feature is not yet exposed or accessible using the PySimpleGUI APIs, you are not walled off from the framework. You can expand capabilities without directly modifying the PySimpleGUI package itself.
Bridging the "GUI Gap"
Python has brought a large number of people into the programming community. The number of programs and the range of areas it touches is mindboggling. But more often than not, these technologies are out of reach of all but a handful of people. The majority of Python programs are "command line" based. This isn't a problem for programmer-types, as we're all used to interacting with computers through a text interface; most "normal people," however, are not. This creates a digital divide, a "GUI Gap".
Adding a GUI to a program opens that program up to a wider audience. It becomes more approachable. GUIs can also make interacting with some programs easier, even for those that are comfortable with a command-line interface. And finally, some problems require a GUI.
Recognition of Open Source Use
In the Demo Programs or one of the PySimpleGUI Account's repos, these packages were used at least one time. Some of you are the goodies on the right of the GUI gap: chatterbot, cv2, fitz, forecastio, gtts, matplotlib, mido, mpl_toolkits, notifypy, numpy, pandas, PIL, praw, psgtray, psutil, pyfiglet, pygame, pylab, pymunk, requests, vlc, win32api, win32con, win32gui, win32process.
LPLG3 as an Example
The licensing terms in the LLC3 Licensing, it states:
-.
Since the above packages each have a similar license clause, I'm listing them here, in what I would consider a "prominent notice" location, that I'm using the fine works of these groups or individuals. They are used in the Demo Programs most likely or one of the Repos that are under this account as this list is all inclusive.
You all have my respect and admiration. You're enabling bigger things. What a special kind of thing to make. Who knows what you've enabled. I believe more people are getting over to your creations and getting to experience them.
tkinter team - PySimpleGUI would be nowhere without your lengthy work & continuous dedication. ONE GUI API for 3 different OS's? Really? With no code changes to move between? That's a huge accomplishment. You're #1 to me.
Getting Over "The Bar"
It's been said by some that "the bar is pretty high" when it comes to learning GUI programming in Python.
What happens when the bar is placed on the ground and can be stepped over?
This is one of the questions that the PySimpleGUI project has tried to answer. Here's a humorous look at what's been a not funny situation.
The results have been fascinating to witness and it's been touching to read the accounts of the journeys of users.
Nothing prepared me for the emails that started to arrive soon after the first release of PySimpleGUI. They are heartwarming and heartbreaking tales of life-long dreams of creating a program that required a GUI. Some made a few attempts, giving up each time. Others never started once they started to research what was required.
After recounting the varied and long road to finding PySimpleGUI, the stories became similar. They each found success and expressed joy and gratitude. The joy expressed in these messages was unlike anything I had encountered in the entirety of career in the computing field.
It's been these emails and the messages of gratitude seen here in the GitHub Issues that made dedicating my life to this project a non-decision.
About Me 👋
Hi there! I'm Mike. You'll find me right here, on the PySimpleGUI GitHub, solving problems and continuously pushing PySimpleGUI forward. I've dedicated my days, nights, and weekends to the project and PySimpleGUI users. Our successes are ultimately shared. I'm successful when you're successful.
While I'm a relative newcomer to Python, I've been writing software since the 70s. The majority of my career was spent creating products in Silicon Valley. I bring to PySimpleGUI the same professionalism and dedication as I did to the corporate products I developed. You are my customers now.
Project Goals 🥅
Two of the most important goals of the PySimpleGUI project:
- Having fun
- Your success
Fun as a goal on a serious project sounds odd, but it's a serious goal. I find writing these GUI programs to be a lot of fun. One reason is how little time it takes to write a complete solution. If we're not enjoying the process then someone's going to give up.
There is a significant amount of documentation, a cookbook, 100's of demo programs to get you immediately running, a detailed call reference, YouTube videos, online Trinket demos, and more... all working to create... a fun experience.
Your Success is a shared goal. PySimpleGUI was built for developers. You're my peeps. It's been an unexpected reward to see the results of the combined effort of users and PySimpleGUI. Use the documentation & other materials to help build your application. If you run into trouble, help is available by opening on Issue on the PySimpleGUI GitHub. Take a look at the section on Support below.
Educational Resources 📚 is easy to remember and is where the documentation is located. You'll find tabs across the top that represent several different documents. The documentation is located on "Read The Docs" so that there is a table of contents for each document and they are easy to search.
There are 100s of pages of written documentation and 100s of example programs that will help you be effective very quickly. Rather than requiring days or weeks of investment to learn a single GUI package, you may be able to complete your project in a single afternoon when using PySimpleGUI.
Example 1 - The One-Shot Window
This type of program is called a "one-shot" window because the window is displayed one time, the values collected, and then it is closed. It doesn't remain open for a long time like you would in a Word Processor.
Anatomy of a Simple PySimpleGUI Program
There are 5 sections to a PySimpleGUI program() # Part 5 - Close the Window
The code produces this window
Example 2 - Interactive Window
In this example, our window will remain on the screen until the user closes the window or clicks the Quit button. The main difference between the one-shot window you saw earlier and an interactive window is the addition of an "Event Loop". The Event Loop reads events and inputs from your window. The heart of your application lives in the event loop.
import PySimpleGUI as sg # Define the window's contents layout = [[sg.Text("What's your name?")], [sg.Input(key='-INPUT-')], [sg.Text(size=(40,1), key='-OUTPUT-')], [sg.Button('Ok'), sg.Button('Quit')]] # Create the window window = sg.Window('Window Title', layout) # Display and interact with the Window using an Event Loop while True: event, values = window.read() # See if user wants to quit or window was closed if event == sg.WINDOW_CLOSED or event == 'Quit': break # Output a message to the window window['-OUTPUT-'].update('Hello ' + values['-INPUT-'] + "! Thanks for trying PySimpleGUI") # Finish up by removing from the screen window.close()
This is the window that Example 2 produces.
And here's what it looks like after you enter a value into the Input field and click the Ok button.
Let's take a quick look at some of the differences between this example and the one-shot window.
First, you'll notice differences in the layout. Two changes in particular are important. One is the addition of the
key parameter to the
Input element and one of the
Text elements. A
key is like a name for an element. Or, in Python terms, it's like a dictionary key. The
Input element's key will be used as a dictionary key later in the code.
Another difference is the addition of this
Text element:
[sg.Text(size=(40,1), key='-OUTPUT-')],
There are 2 parameters, the
key we already covered. The
size parameter defines the size of the element in characters. In this case, we're indicating that this
Text element is 40 characters wide, by 1 character high. Notice that there is no text string specified which means it'll be blank. You can easily see this blank row in the window that's created.
We also added a button, "Quit".
The Event Loop has our familiar
window.read() call.
Following the read is this if statement:
if event == sg.WINDOW_CLOSED or event == 'Quit': break
This code is checking to see if the user closed the window by clicking the "X" or if they clicked the "Quit" button. If either of these happens, then the code will break out of the event loop.
If the window wasn't closed nor the Quit button clicked, then execution continues. The only thing that could have happened is the user clicked the "Ok" button. The last statement in the Event Loop is this one:
window['-OUTPUT-'].update('Hello ' + values['-INPUT-'] + "! Thanks for trying PySimpleGUI")
This statement updates the
Text element that has the key
-OUTPUT- with a string.
window['-OUTPUT-'] finds the element with the key
-OUTPUT-. That key belongs to our blank
Text element. Once that element is returned from the lookup, then its
update method is called. Nearly all elements have an
update method. This method is used to change the value of the element or to change some configuration of the element.
If we wanted the text to be yellow, then that can be accomplished by adding a
text_color parameter to the
update method so that it reads:
window['-OUTPUT-'].update('Hello ' + values['-INPUT-'] + "! Thanks for trying PySimpleGUI", text_color='yellow')
After adding the
text_color parameter, this is our new resulting window:
The parameters available for each element are documented in both the call reference documentation as well as the docstrings. PySimpleGUI has extensive documentation to help you understand all of the options available to you. If you lookup the
update method for the
Text element, you'll find this definition for the call:
As you can see several things can be changed for a
Text element. The call reference documentation is a valuable resource that will make programming in PySimpleGUI, uhm, simple.
Layouts Are Funny LOL! 😆
Your window's layout is a "list of lists" (LOL). Windows are broken down into "rows". Each row in your window becomes a list in your layout. Concatenate together all of the lists and you've got a layout...a list of lists.
Here is the same layout as before with an extra
Text element added to each row so that you can more easily see how rows are defined:
layout = [ [sg.Text('Row 1'), sg.Text("What's your name?")], [sg.Text('Row 2'), sg.Input()], [sg.Text('Row 3'), sg.Button('Ok')] ]
Each row of this layout is a list of elements that will be displayed on that row in your window.
Using lists to define your GUI has some huge advantages over how GUI programming is done using other frameworks. For example, you can use Python's list comprehension to create a grid of buttons in a single line of code.
These 3 lines of code:
import PySimpleGUI as sg layout = [[sg.Button(f'{row}, {col}') for col in range(4)] for row in range(4)] event, values = sg.Window('List Comprehensions', layout).read(close=True)
produces this window which has a 4 x 4 grid of buttons:
Recall how "fun" is one of the goals of the project. It's fun to directly apply Python's powerful basic capabilities to GUI problems. Instead of pages of code to create a GUI, it's a few (or often 1) lines of code.
Collapsing Code
It's possible to condense a window's code down to a single line of code. The layout definition, window creation, display, and data collection can all be written in this line of code:
event, values = sg.Window('Window Title', [[sg.Text("What's your name?")],[sg.Input()],[sg.Button('Ok')]]).read(close=True)
The same window is shown and returns the same values as the example showing the sections of a PySimpleGUI program. Being able to do so much with so little enables you to quickly and easily add GUIs to your Python code. If you want to display some data and get a choice from your user, it can be done in a line of code instead of a page of code.
By using short-hand aliases, you can save even more space in your code by using fewer characters. All of the Elements have one or more shorter names that can be used. For example, the
Text element can be written simply as
T. The
Input element can be written as
I and the
Button as
B. Your single-line window code thus becomes:
event, values = sg.Window('Window Title', [[sg.T("What's your name?")],[sg.I()],[sg.B('Ok')]]).read(close=True)
Code Portability
PySimpleGUI is currently capable of running on 4 Python GUI Frameworks. The framework to use is specified using the import statement. Change the import and you'll change the underlying GUI framework. For some programs, no other changes are needed than the import statement to run on a different GUI framework. In the example above, changing the import from
PySimpleGUI to
PySimpleGUIQt,
PySimpleGUIWx,
PySimpleGUIWeb will change the framework.
Porting GUI code from one framework to another (e.g. moving your code from tkinter to Qt) usually requires a rewrite of your code. PySimpleGUI is designed to enable you to have easy movement between the frameworks. Sometimes some changes are required of you, but the goal is to have highly portable code with minimal changes.
Some features, like a System Tray Icon, are not available on all of the ports. The System Tray Icon feature is available on the Qt and WxPython ports. A simulated version is available on tkinter. There is no support for a System Tray icon in the PySimpleGUIWeb port.
Runtime Environments
Integrations
Among the more than 200 "Demo Programs", you'll find examples of how to integrate many popular Python packages into your GUI.
Want to embed a Matplotlib drawing into your window? No problem, copy the demo code and instantly have a Matplotlib drawing of your dreams into your GUI.
These packages and more are ready for you to put into your GUI as there are demo programs or a demo repo available for each:
Installing 💾
Two common ways of installing PySimpleGUI:
- pip to install from PyPI
- Download the file PySimpleGUI.py and place in your application's folder
Pip Installing & Upgrading
The current suggested way of invoking the
pip command is by running it as a module using Python. Previously the command
pip or
pip3 was directly onto a command-line / shell. The suggested way
Initial install for Windows:
python -m pip install PySimpleGUI
Initial install for Linux and MacOS:
python3 -m pip install PySimpleGUI
To upgrade using
pip, you simply add 2 parameters to the line
--upgrade --no-cache-dir.
Upgrade installation on Windows:
python -m pip install --upgrade --no-cache-dir PySimpleGUI
Upgrade for Linux and MacOS:
python3 -m pip install --upgrade --no-cache-dir PySimpleGUI
Single File Installing
PySimpleGUI was created as a single .py file so that it would be very easy for you to install it, even on systems that are not connected to the internet like a Raspberry Pi. It's as simple as placing the PySimpleGUI.py file into the same folder as your application that imports it. Python will use your local copy when performing the import.
When installing using just the .py file, you can get it from either PyPI or if you want to run the most recent unreleased version then you'll download it from GitHub.
To install from PyPI, download either the wheel or the .gz file and unzip the file. If you rename the .whl file to .zip you can open it just like any normal zip file. You will find the PySimpleGUI.py file in one of the folders. Copy this file to your application's folder and you're done.
The PyPI link for the tkinter version of PySimpleGUI is:
The GitHub repo's latest version can be found here:
Now some of you are thinking, "yea, but, wait, having a single huge source file is a terrible idea". And, yea, sometimes it can be a terrible idea. In this case, the benefits greatly outweighed the downside. Lots of concepts in computer science are tradeoffs or subjective. As much as some would like it to be, not everything is black and white. Many times the answer to a question is "it depends".
Galleries 🎨
Work on a more formal gallery of user-submitted GUIs as well as those found on GitHub is underway but as of this writing it's not complete. There are currently 2 places you can go to see some screenshots in a centralized way. Hopefully, a Wiki or other mechanism can be released soon to do justice to the awesome creations people are making.
User Submitted Gallery
The first is a user submitted screenshots issue located on the GitHub. It's an informal way for people to show off what they've made. It's not ideal, but it was a start.
Massive Scraped GitHub Images
The second is a massive gallery of over 3,000 images scraped from 1,000 projects on GitHub that are reportedly using PySimpleGUI. It's not been hand-filtered and there are plenty of old screenshots that were used in the early documentation. But, you may find something in there that sparks your imagination.
Uses for PySimpleGUI 🔨
The following sections showcase a fraction of the uses for PySimpleGUI. There are over 1,000 projects on GitHub alone that use PySimpleGUI. It's truly amazing how possibilities have opened up for so many people. Many users have spoken about previously attempting to create a GUI in Python and failing, but finally achieving their dreams when they tried PySimpleGUI.
Your First GUI
Of course one of the best uses of PySimpleGUI is getting you into making GUIs for your Python projects. You can start as small as requesting a filename. For this, you only need to make a single call to one of the "high-level functions" called
popup. There are all kinds of popups, some collect information.
popup on itself makes a window to display information. You can pass multiple parameters just like a print. If you want to get information, then you will call functions that start with
popup_get_ such as
popup_get_filename.
Adding a single line to get a filename instead of specifying a filename on the command line can transform your program into one that "normal people" will feel comfortable using.
import PySimpleGUI as sg filename = sg.popup_get_file('Enter the file you wish to process') sg.popup('You entered', filename)
This code will display 2 popup windows. One to get the filename, which can be browsed to or pasted into the input box.
The other window will output what is collected.
Rainmeter-Style Windows
The default settings for GUI frameworks don't tend to produce the nicest looking windows. However, with some attention to detail, you can do several things to make windows look attractive. PySimpleGUI makes it easier to manipulate colors and features like removing the title bar. The result is windows that don't look like your typical tkinter windows.
Here is an example of how you can create windows that don't look like your typical tkinter in windows. In this example, the windows have their titlebars removed. The result is windows that look much like those found when using Rainmeter, a desktop widget program.
You can easily set the transparency of a window as well. Here are more examples of desktop widgets in the same Rainmeter style. Some are dim appearing because they are semi-transparent.
Both of these effects; removing the titlebar and making a window semi-transparent, are achieved by setting 2 parameters when creating the window. This is an example of how PySimpleGUI enables easy access to features. And because PySimpleGUI code is portable across the GUI frameworks, these same parameters work for the other ports such as Qt.
Changing the Window creation call in Example 1 to this line of code produces a similar semi-transparent window:
window = sg.Window('My window', layout, no_titlebar=True, alpha_channel=0.5)
Games
While not specifically written as a game development SDK, PySimpleGUI makes the development of some games quite easy.
This Chess program not only plays chess, but it integrates with the Stockfish chess-playing AI.
Several variants of Minesweeper have been released by users.
Card games work well with PySimpleGUI as manipulating images is simple when using the PySimpleGUI
Graph element.
While not specifically written as a game development SDK, PySimpleGUI makes development of some games quite easy.
Media Capture and Playback
Capturing and displaying video from your webcam in a GUI is 4 lines of PySimpleGUI code. Even more impressive is that these 4 lines of code work with the tkinter, Qt, and Web ports. You can display your webcam, in realtime, in a browser using the same code that displays the image using tkinter.
Media playback, audio and video, can also be achieved using the VLC player. A demo application is provided to you so that you have a working example to start from. Everything you see in this readme is available to you as a starting point for your own creations.
Artificial Intelligence
AI and Python have long been a recognized superpower when the two are paired together. What's often missing however is a way for users to interact with these AI algorithms familiarly, using a GUI.
These YOLO demos are a great example of how a GUI can make a tremendous difference in interacting with AI algorithms. Notice two sliders at the bottom of these windows. These 2 sliders change a couple of the parameters used by the YOLO algorithm.
If you were tuning your YOLO demo using only the command line, you would need to set the parameters, once, when you launch the application, see how they perform, stop the application, change the parameters, and finally restart the application with the new parameters.
Contrast those steps against what can be done using a GUI. A GUI enables you to modify these parameters in real-time. You can immediately get feedback on how they are affecting the algorithm.
There are SO many AI programs that have been published that are command-line driven. This in itself isn't a huge hurdle, but it's enough of a "pain in the ass" to type/paste the filename you want to colorize on the command line, run the program, then open the resulting output file in a file viewer.
GUIs have the power to change the user experience, to fill the "GUI Gap". With this colorizer example, the user only needs to supply a folder full of images, and then click on an image to both colorize and display the result.
The program/algorithm to do the colorization was freely available, ready to use. What was missing is the ease of use that a GUI could bring.
Graphing
Displaying and interacting with data in a GUI is simple with PySimpleGUI. You have several options.
You can use the built-in drawing/graphing capabilities to produce custom graphs. This CPU usage monitor uses the
Graph element
Matplotlib is a popular choice with Python users. PySimpleGUI can enable you to embed Matplotlib graphs directly into your GUI window. You can even embed the interactive controls into your window if you want to retain the Matplotlib interactive features.
Using PySimpleGUI's color themes, you can produce graphs that are a notch above default graphs that most people create in Matplotlib.
Front-ends
The "GUI Gap" mentioned earlier can be easily solved using PySimpleGUI. You don't even need to have the source code to the program you wish to add a GUI onto. A "front-end" GUI is one that collects information that is then passed to a command-line application.
Front-end GUIs are a fantastic way for a programmer to distribute an application that users were reluctant to use previously because they didn't feel comfortable using a command-line interface. These GUIs are your only choice for command-line programs that you don't have access to the source code for.
This example is a front-end for a program called "Jump Cutter". The parameters are collected via the GUI, a command-line is constructed using those parameters, and then the command is executed with the output from the command-line program being routed to the GUI interface. In this example, you can see in yellow the command that was executed.
Raspberry Pi
Because PySimpleGUI is compatible back to Python 3.4, it is capable of creating a GUI for your Raspberry Pi projects. It works particularly well when paired with a touchscreen. You can also use PySimpleGUIWeb to control your Pi if it doesn't have a monitor attached.
Easy Access to Advanced Features
Because it's very easy to access many of the underlying GUI frameworks' features, it's possible to piece together capabilities to create applications that look nothing like those produced using the GUI framework directly.
For example, it's not possible to change the color/look-and-feel of a titlebar using tkinter or the other GUI packages, but with PySimpleGUI it's easy to create windows that appear as if they have a custom titlebar.
Unbelievably, this window is using tkinter to achieve what appears to be something like a screensaver.
On windows, tkinter can completely remove the background from your application. Once again, PySimpleGUI makes accessing these capabilities trivial. Creating a transparent window requires adding a single parameter to the call that creates your
Window. One parameter change can result in a simple application with this effect:
You can interact with everything on your desktop, clicking through a full-screen window.
Themes
Tired of the default grey GUIs? PySimpleGUI makes it trivial for your window to look nice by making a single call to the
theme function. There are over 150 different color themes available for you to choose:
With most GUI frameworks, you must specify the color for every widget you create. PySimpleGUI takes this chore from you and will automatically color the Elements to match your chosen theme.
To use a theme, call the
theme function with the name of the theme before creating your window. You can add spaces for readability. To set the theme to "Dark Grey 9":
import PySimpleGUI as sg sg.theme('dark grey 9')
This single line of code changes the window's appearance entirely:
The theme changed colors of the background, text, input background, input text, and button colors. In other GUI packages, to change color schemes like this, you would need to specify the colors of each widget individually, requiring numerous changes to your code.
Support 💪
Your first stop should be the documentation and demo programs. If you still have a question or need help... no problem... help is available to you, at no cost. Simply file an Issue on the PySimpleGUI GitHub repo and you'll get help.
Nearly all software companies have a form that accompanies bug reports. It's not a bad trade... fill in the form, get free software support. This information helps get you an answer efficiently.
In addition to requesting information such as the version numbers of PySimpleGUI and underlying GUI frameworks, you're also given a checklist of items that may help you solve your problem.
Please fill in the form. It may feel pointless to you. It may feel painful, despite it taking just a moment. It helps get you a solution faster. If it wasn't useful and necessary information to help you get a speedy reply and fix, you wouldn't be asked to fill it out. "Help me help you".
Supporting
Financial support for the project is greatly appreciated. To be honest, financial help is needed. It's expensive just keeping the lights on. The domain name registrations, a long list of subscriptions for things like Trinket, consulting help, etc., quickly add up to a sizable recurring cost.
PySimpleGUI wasn't inexpensive to create. While a labor of love, it was very laborious over several years, and quite a bit was invested, and continues to be invested, in creating what you see today.
PySimpleGUI has an open-source license and it would be great if it could remain that way. If you or your company (especially if you're using PySimpleGUI in a company) are benefiting financially by using PySimpleGUI, you have the capability of extending the life of the project for you and other users.
Buy Me A Coffee
Buy Me a Coffee is a great way to publicly support developers. It's quick, easy, and your contribution is recorded so that others can see that you're a supporter of PySimpleGUI. You can also choose to make your donation private.
GitHub Sponsoring
The GitHub recurring sponsorship is how you can sponsor the project at varying levels of support on an ongoing basis. It's how many Open Source developers are able to receive corporate level sponsorship.
Your help in financially contributing to the project would be greatly appreciated. Being an Open Source developer is financially challenging. YouTube video creators are able to make a living creating videos. It's not so easy yet for Open Source developers.
Thank you for the Thank You's
To everyone that's helped, in whatever fashion, I'm very very grateful.
Even taking a moment to say "thank you" helps, and a HUGE number of you have done that. It's been an amazing number actually. I value these thanks and find inspiration in the words alone. Every message is a little push forward. It adds a little bit of energy and keeps the whole project's momentum. I'm so very grateful to everyone that's helped in whatever form it's been.
Contributing 👷
While PySimpleGUI is currently licensed under an open-source license, the project itself is structured like a proprietary product. Pull Requests are not accepted.
One of the best ways for you to contribute code is to write and publish applications. Users are inspired by seeing what other users build. Here's a simple set of steps you can take - Create a GitHub repo, post the code, and include a screenshot in your repo's readme file. Then come back to the PySimpleGUI repo and post a screenshot in Issue #10 or in the project's WIKI.
If there is a feature missing that you need or you have an enhancement to suggest, then open an Issue
Special Thanks 🙏
This version of the PySimpleGUI readme wouldn't have come together without the help from @M4cs. He's a fantastic developer and has been a PySimpleGUI supporter since the project's launch. @israel-dryer is another long-term supporter and has written several PySimpleGUI programs that pushed the envelope of the package's capabilities. The unique minesweeper that uses an image for the board was created by Israel. @jason990420 surprised many when he published the first card game using PySimpleGUI that you see pictured above as well as the first minesweeper game made with PySimpleGUI. @Chr0nicT is the youngest developer I've worked with, ever, on projects. This kid shocks me on a regular basis. Ask for a capability, such as the PySimpleGUI GitHub Issues form error checking bot, and it simply happens regardless of the technologies involved. I'm fortunate that we were introduced. Someday he's going to be whisked away, but until then we're all benefiting from his talent. The Japanese version of the readme was greatly improved with help from @okajun35. @nngogol has had a very large impact on the project, also getting involved with PySimpleGUI in the first year of initial release. He wrote a designer, came up with the familiar window[key] lookup syntax, wrote the tools that create the documentation, designed the first set of doc strings as well as tools that generate the online documenation using the PySimpleGUI code itself. PySimpleGUI would not be where it is today were it not for the help of these individuals.
The more than 2,200 GitHub repos that use PySimpleGUI are owed a "Thank You" as well, for it is you that has been the inspiration that fuels this project's engine.
The overseas users that post on Twitter overnight are the spark that starts the day's work on PySimpleGUI. They've been a source of positive energy that gets the development engine started and ready to run every day. As a token of appreciation, this readme file has been translated into Japanese.
You've all been the best user community an Open Source developer could hope for.
*Note that all licence references and agreements mentioned in the PySimpleGUI README section above are relevant to that project's source code only. | https://python.libhunt.com/pysimplegui-alternatives | CC-MAIN-2021-43 | refinedweb | 6,440 | 65.01 |
Edit topic image
Recommended image size is 715x450px or greater
Hei all noobs want to ask.
I want to ask you something which make me very confuse right now. Hopefully you’d like to help me.
I have a windows server 2008 R2 with exchange 2010 installed. i have configure it and it running well. I also use the IIS 7.
But i have some problem on the IIS side. By default Exchange will install the OWA in the IIS default website. so if i want to access it, i just go to http:/
Now i have a firewall and 2 ip public lets day x.x.x.67 and x.x.x.68
i want to publish my owa and a website call it "horizon" (this website is on the same IIS and binding in the IIS is 192.168.0.2) for each one.
192.168.0.1 –> x.x.x.67
192.168.0.2 –> x.x.x.68
so the problem is, i want to make the "horizon" website to be able access via internet on x.x.x.68~>192.168.0.2(internal) and the OWA can only be access from x.x.x.67~>192.168.0.1( the OWA is on the “default website”, the binding is *:80, means that whatever is the request to IP addresses on that NIC card it will go through default website first. Tthis makes me confuse because if i change the binding from *:80 to 192.168.0.1:80, the EMC will be failed to load).
Is there eny suggestion regarding to this problem?
3 Replies
Jun 11, 2013 at 9:30 UTC
first tip is to use https:/
next tip is to check cas
http:/
Jun 12, 2013 at 2:00 UTC
Confuse. I neither use ISA or namespace because i just want to make a SNAT in the firewall(port 80 open).
My server has 2(or more) virtual IP.
192.168.0.1 and .2
i just want to assign the OWA to 192.168,.0.1 and other website to 192.168.0.2
SNAT from 192.168.0.1 ~>x.x.x.67 and 192.168.0.2 ~>x.x.x.68
what i actually need is to make the OWA and other website on one IIS but different ip address. As a note, We are a small company, we can olny afford 2 windows server licenses. One for data/file and one for IIS.
Jun 12, 2013 at 6:46 UTC
mate this is exchange 2010, the times where you just published an ip and port for owa is gone.
You need to have a running cas with dns and ssl for this stuff to work.
don't need isa though (where did you get isa?)
Users who spiced this post
It's FREE. Always will be. | http://community.spiceworks.com/topic/346580-owa-and-other-website-on-iis-7-problem | CC-MAIN-2015-06 | refinedweb | 478 | 92.73 |
Cannot use open-iscsi inside LXC container
Bug Description
Trying to use open-iscsi from within an LXC container fails because the iSCSI netlink socket does not support multiple network namespaces, causing an "iscsid: sendmsg: bug? ctrl_fd 6" error and failure.
Command attempted: iscsiadm -m node -p $ip:$port -T $target --login
Results in:
Exit code: 18
Stdout: 'Logging in to [iface: default, target: $target, portal: $ip,$port] (multiple)'
Stderr: 'iscsiadm: got read error (0/0), daemon died?
iscsiadm: Could not login to [iface: default, target: $target, portal: $ip,$port].
iscsiadm: initiator reported error (18 - could not communicate to iscsid)
iscsiadm: Could not log into all portals'
ProblemType: Bug
DistroRelease: Ubuntu 13.04
Package: lxc 0.9.0-0ubuntu3.4
ProcVersionSign
Uname: Linux 3.8.0-30-generic x86_64
ApportVersion: 2.9.2-0ubuntu8.3
Architecture: amd64
Date: Tue Sep 17 14:38:08 2013
InstallationDate: Installed on 2013-01-15 (245 days ago)
InstallationMedia: Xubuntu 12.10 "Quantal Quetzal" - Release amd64 (20121017.1)
MarkForUpload: True
SourcePackage: lxc
UpgradeStatus: Upgraded to raring on 2013-05-16 (124 days ago)
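The failure mode described above can be demonstrated without running iscsid at all: inside a private network namespace, a NETLINK_ISCSI socket can still be created and bound, but sending to the kernel is refused, because the kernel registers its NETLINK_ISCSI socket only in the initial namespace. The following is a minimal sketch, not part of the bug report; the message type 0x10 is a dummy placeholder, not a real iSCSI transport message:

```python
import socket
import struct

NETLINK_ISCSI = 8   # protocol number from linux/netlink.h ("open-iSCSI")
NLMSG_HDRLEN = 16   # sizeof(struct nlmsghdr)

def build_nlmsghdr(msg_type, flags, seq, pid, payload=b""):
    """Pack a struct nlmsghdr (len, type, flags, seq, pid) plus payload."""
    length = NLMSG_HDRLEN + len(payload)
    return struct.pack("=IHHII", length, msg_type, flags, seq, pid) + payload

def probe_iscsi_netlink():
    """Send one placeholder message toward the kernel's iSCSI socket.

    In the initial network namespace the sendto() is normally accepted;
    inside a container's private netns it raises ECONNREFUSED, because
    the kernel-side NETLINK_ISCSI socket exists only in init_net.
    """
    s = socket.socket(socket.AF_NETLINK, socket.SOCK_RAW, NETLINK_ISCSI)
    try:
        s.bind((0, 0))                        # autobind a port id for us
        msg = build_nlmsghdr(0x10, 0, 1, 0)   # 0x10 is a dummy type
        s.sendto(msg, (0, 0))                 # destination pid 0 = kernel
        return "accepted"
    except OSError as e:
        return e.errno                        # ECONNREFUSED inside a netns
    finally:
        s.close()
```

Running `probe_iscsi_netlink()` on the host versus inside the container (e.g. under `unshare --net` as root) should show the asymmetry; this matches the strace observation later in the thread that the socket "managed to get connected and only was refused on send".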
I can't answer all of the questions, but the basic idea is that an LXC container should be able to mount an iscsi target from inside the container with very little, if any, cooperation from the host's user space.
I believe other similar systems like nbd use ioctls to configure such devices, but iscsi uses netlink, which I believe is the crux of the problem.
@Clint,
Thanks. Then I see three possible workarounds:
1. The simplest way would be to have iscsid running on the host, and connect to it over tcp from the container.
2. You could also have a container without its own network namespace, and have iscsid running there.
3. You could open the netlink socket from the host network namespace, and pass that into the container.
If none of these suffices, then I'll mark this as affecting the kernel, and it'll take a new kernel feature to make this work. However controlling host devices from a container is in general deemed suboptimal (see user namespaces which may not access many devices at all). To solve the netlink part of the issue we would have to come up with a way to choose which containers may access the netlink socket.
It would still be useful for future consideration of this bug if you could attach an strace of the netlink failure to this bug.
Thanks for your reply, I'll chat with Robert and Clint to see if any of these solutions is reasonable for us.
As a reference point, here's the setup we're using:
Host has 2 VMs: An LXC and an qemu VM
The host has the iscsi_tcp module loaded, which then can be seen and used for the iscsi daemon within the container.
Now, what we're attempting to do is provision the qemu VM via the LXC container using OpenStack's baremetal provisioning tools in a virtualized environment (no nested KVM!), so loosely the procedure is: the LXC container boots the qemu image (we have a nifty power driver) and gives it an address via dnsmasq-dhcp, loads up some things via dnsmasq-tftp (this all works) and then we use iscsi to copy data to the qemu VM. Robert or Clint can chime in with more details (or to clarify/correct!).
Today I ran through the test again and connected to the iscsid daemon inside the container for your strace, attached is the output from: strace -p 1488 -o iscsid_
Thanks, that is an interesting strace. What is 192.0.2.69 - is that the host? Could you also start the daemon in the container by hand under strace for a few seconds so we can see exactly how fd 6 is created? (Presumably it is a connection to iscsi_nl_sock, but I'm confused since (a) it managed to get connected and only was refused on send, and (b) if the daemon is talking over tcp then why is it doing netlink at all).
192.0.2.69 is the IP of the qemu VM that the LXC container is attempting to provision.
I'll load up my test instance soon to get that additional strace.
Assuming I should use the init script for this, attached output from running the following in the container:
strace -o iscsi-start.txt service open-iscsi start
Thanks - unfortunately we need the -f flag added to strace to follow forks.
Aha! Attached: strace -f -o iscsi-start_f.txt service open-iscsi start
This bug is missing log files that will aid in diagnosing the problem. From a terminal window please run:
apport-collect 12268saDevices:
total 0
crw-rw---- 1 root audio 116, 1 Jul 31 21:36 seq
crw-rw---- 1 root audio 116, 33 Jul 31 21:36 timer
AplayDevices: Error: [Errno 2] No such file or directory
ApportVersion: 2.14.1-0ubuntu3.2
Architecture: amd64
ArecordDevices: Error: [Errno 2] No such file or directory
AudioDevicesInUse: Error: [Errno 2] No such file or directory
CRDA: Error: [Errno 2] No such file or directory
DistroRelease: Ubuntu 14.04
IwConfig: Error: [Errno 2] No such file or directory
Lspci: Error: [Errno 2] No such file or directory
Lsusb: Error: [Errno 2] No such file or directory
MachineType: Xen HVM domU
Package: lxc
PciMultimedia:
ProcEnviron:
TERM=xterm-
PATH=(custom, no user)
XDG_RUNTIME_
LANG=en_US.UTF-8
SHELL=/bin/bash
ProcFB: 0 EFI VGA
ProcKernelCmdLine: BOOT_IMAGE=
ProcVersionSign
RelatedPackageV
linux-
linux-
linux-firmware N/A
RfKill: Error: [Errno 2] No such file or directory
Tags: trusty
UdevLog: Error: [Errno 2] No such file or directory: '/var/log/udev'
Uname: Linux 3.13.0-32-generic x86_64
UpgradeStatus: No upgrade log present (probably fresh install)
UserGroups:
_MarkForUpload: True
dmi.bios.date: 11/28/2013
dmi.bios.vendor: Xen
dmi.bios.version: 4.1.5
apport information
strace -f -o open-iscsi-
This is still an impacting issue, Curious if there has been any progress on it on any front?
Being that apport information was requested I've provided it from my running systems.
The apport information attached to this issue is from within the LXC container.
The host system is:
Ubuntu 14.04.01
Kernel: "3.13.0-32-generic #57-Ubuntu SMP Tue Jul 15 03:51:08 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux"
LXC Version: 1.0.4-0ubuntu0.1..
Quoting Jason Harley (<email address hidden>):
>.
I personally won't have time to work on this this year. I'd recommend
simply sitting down and looking through the kernel code, and getting
your bearings through the netlink code for starters. I'm definitely
interested in this and hope to join in early next year.
@Serge, Hi, had a customer ping me about this bug, any updates? Here's his explanation:
You may recall our conversation around iSCSI connectivity for binding block volume with host in OpenStack. The host is Linux container and block volume is on EMC VMAX. The I/O path is over iSCSI. The environment is Ubuntu 14.04 LTS x64. Our observation and KB found from internet is also given.
- Regular attach volume works fine (iSCSI login occurs between compute node and array target). In this context, Compute node is a physical host
- When creating bootable volume, controller node needs to perform iSCSI login for instance to copy. We are seeing issues with this. In this case, Compute node is an LXC container.
- Same isciadm commands that fail within the container, run fine when running outside the container (on physical controller host)
Looks like,
There’s a iSCSI kernel bug with mounting a target within a container
The issue is with multiple namespaces it appears.
KBs found:
https:/
https:/
https:/
Do we know whether this is fixed?
@m.morana - to my knowledge modern kernel's still don't have a namespace aware ISCSI netlink implementation. In an OpenStack context, I recall seeing something about changing nova's volume attach code to use qemu's native iSCSI support which may be a workaround for iscsiadm and native block devices, but I haven't had a chance to look into it myself.
This is a blocking issue for users of iscsi-based storage HW on Openstack; is there any way of re-prioritizing this issue?
Bump if anyone has time to work on this it would be a huge benefit to the OpenStack community.
This is a blocking issue for us too as we're not able to fully use LXC containers in our OpenStack deployments. Specifically we can not run nova-compute in an lxc container due to issues with (RW) AF_NETLINK . In the os-ansible-
As mentioned, I have gear that I can dedicate to testing things out but I don't have time to work through the problems at present.
Quoting M.Morana (<email address hidden>):
> @Serge, Hi, had a customer ping me about this bug, any updates? Here's
> his explanation:
Sorry, no, I have not spent any time on this. As far as I know neither
has anyone on the kernel team, and I haven't seen it discussed on any
mailing lists.
From what I've seen, I'm asked about this once or twice per year, and
it's always deemed low priority. If it's now deemed high priority, then
we will simply need to find a person and time to do it. (I don't know
enough about iscsi to even guess as to the time to do it)
Chris Leach started posted patches that would fix some of what is needed to support this on open-iscsi mailing list (https:/
Thanks for reporting this bug.
Your example command says '$ip:$port'. Is the iscsid running on the host
or in the container? Is $ip the ip of the host?
If $ip is the host ip and you just want iscsiadm in the guest to talk to iscsid on the
host, that should work.
There are several ways depending on your configuration where netlink sockets
might be being attempted. Could you show strace -f output to show exactly
which fails? (iscsiadm itself should only fail if you're trying offload, which it
doesn't look like you are)
Netlink sockets are per-netns, so if you want to be able to connect to a
netlink socket from another netns, then something will need to open a
socket from the target netns and pass that into the other ns. (This
could be arranged with setns, but only from the host). | https://bugs.launchpad.net/ubuntu/+source/lxc/+bug/1226855 | CC-MAIN-2016-40 | refinedweb | 1,751 | 61.06 |
Below is a quick reference guide to Wemos D1 pin mapping for GPIO, I2C and SPI when working from MicroPython.
Pin mapping
The visible pin numbers written on the Wemos D1 does not relate to the internal pin numbering.
In MicroPython you need to use the internal pin numbers to create your
Pin objects and interface with the
pins. The following table shows the internal to external pin mapping — together with the default hardware level functions
for the given pin.
The hardware
I2C positions can be ignored in MicroPython, since the protocol is only available through
a software implementation which works on all GPIO pins.
So, for example if you want to use pin
D6 for output, you could create the interface within MicroPython as follows:
import machine d6 = machine.Pin(12, Pin.OUT)
The Wemos D1 only provides a single analogue input pin, which is accessed by it's ADC (referenced as 0).
import machine adc = machine.ADC(0)
I2C
The I2C ports in the table above reflect the pins where hardware I2C is available on ESP8266. However, this is not currently accessible from MicroPython. Instead MicroPython offers a software I2C implementation accomplished by bit-banging, on any combination of GPIO pins. I usually stick them in the same place regardless —
from machine import I2C i2c = I2C(-1, scl=Pin(5), sda=Pin(4))
The
-1 indicates to MicroPython to use the software I2C implementation.
You can provide any GPIO pins passing in to the
scl and
sda parameters.
i2c = I2C(-1, scl=Pin(12), sda=Pin(13)) # create I2C using ports D6 and D7 for SCL and SDA respectively
SPI
SPI (Serial Peripheral Interface Bus) is a simple protocol, where master and slave device are linked by three
data lines
MISO (Master in, Slave out),
MOSI (Master out, Slave in),
SCK (Serial clock, also see as
M-CLK)
The
SS (Slave select) line is used to control communication with multiple
slaves, for more information see this writeup here.
MicroPython provides both software and hardware SPI implementations both via the
machine.SPI
class.
To use the hardware implementation of SPI, pass in
1 for the first parameter.
Pins
M-CLK,
MISO,
MOSI (and
SS) will end up on the pins shown in the
table (
D5,
D6,
D7 and
D8 respectively).
from machine import Pin, SPI spi = SPI(1, baudrate=80000000, polarity=0, phase=0)
The software implementation can be used by passing
-1 as the first parameter, and
pins to use for
sck,
mosi and
miso.
from machine import Pin, SPI spi = SPI(-1, baudrate=100000, polarity=1, phase=0, sck=Pin(5), mosi=Pin(4), miso=Pin(0)) | https://www.mfitzp.com/reference/wemos-d1-pins-micropython/ | CC-MAIN-2021-43 | refinedweb | 440 | 58.21 |
User talk:The Thinker/Archive 5
From Uncyclopedia, the content-free encyclopedia
Talk Archive 2 | Talk Archive 5
Talk Archive 3 | Talk Archive 6
Notes on the Project, part 79, chapter 3, paragraph 4
Well, it's been quite a long time since we began back in April. All those months and not a single edit conflict. Ha!
Sooo..Now what? We could undertake the DVD page project, which would be mighty even by our standards, or just leave them as is, at least for now. Whaddya think? --THE 23:07, 13 July 2007 (UTC)
- Ya know, after doing the last edit on the commentary, I realized that Sex Seafood is one of the most enjoyable parts about Un for me. We can work on it for hours, or leave it alone for months, and either way it comes out hilarious. I think we should keep going man; its the coolest long-term writing assignment I've ever worked on. :)
- Assuming you're up for it, I'm thinkin' we'll just go about it the way we'd discussed in one of the previous 82 discussions on the subject:
- Lets start creating related articles within our user spaces (your place or mine, baby, I'm good either way.. lol). I just don't want any of our friends in high places mistaking a page like "Sex Seafood: Previews" for vanity, cruft or otherwise; these next ones (like the previews page) will make little sense out of context, I suspect.
- After they're finished, I was thinkin' that it would be a good idea to move them all to UnFilms:Sex Seafood. This will be the DVD Menu page, linking to each article which will be contained in it's namespace (ie. UnFilms:Sex Seafood/Commentary, UnFilms:Sex Seafood/Featurette, etc).
- Besides those, I think now would be a good time to figure out exactly what content we want on this DVD besides the commentary and the documentary. Looking through our discussions, I found my original list of ideas:
The Film (script article) The Commentary (next article)[aw.. now its all grow'd up! :)]")
- Scene Selection (per a different discussion, possibly part 63)
- ...I think they're all still good ideas, and none of them really seem impossible...What do you think? Additions, subtractions? After that, pick one and we'll start phase 8 of this beast! :D --THINKER 23:35, 13 July 2007 (UTC)
- PS - Zombiebaron has agreed to do some pictures. :) --THINKER 00:21, 14 July 2007 (UTC)
- Ah, excellent! Yes, I think it's definitely a good idea to make all future "sex seafood" article sin our namespace (s) until they're ready. And it's great that we've got someone to help us out with pics, and hopefully with formatting the DVD menu, which is beyond my rather primitive knowledge of wiki formatting. --THE 13:00, 14 July 2007 (UTC)
- Oh definitely you could help. Right now is a bit early since we haven't started in with any of these "special features" just yet, and we may change our minds about some of the menu items. The script of the sex seafood will probably end up as "UnFilms:Sex Seafood/feature film", the commentary would probably be "unfilms:Sex Seafood/commentary" and the article about the making of the movie would probably be "unfilms:Sex Seafood/making of". So the menu might have "SEX SEAFOOD:An Exploration in Flim by Unrelated Quotes Guy and Peter Bogdanovich" at the top, then [[UnFilms:Sex Seafood/Feature Film|Feature Film]], [[UnFilms:Sex Seafood/Commentary|Exclusive director's commentary]], and so on. It would also probably need an image, most likely the 'chop Zombiebaron could do. We'll keep you posted.
- Thinker:I have made User:THE/SS, a list of all the DVD pages we plan to do, in the order they'll probably be on in the menu. That way, whenever one of us starts a new one, we can just add it to the list as a means of keeping ourselves organized. I plan to start the "deleted scenes" one today. You should probably start out the "Smiths" one and the "previews" one yourself once the time comes, since those are the ones I'm a little shaky on (as I think we talked about already somewhere in part 38, probably chapter 2) --THE 13:17, 14 July 2007 (UTC)
- THE: Perfection, I think I might have a page or two more to add to the directory (that may or may not make the final cut, we'll see). But as is, the directory works perfectly fine; I'll check back later for progress on the delete scenes and to start the previews. I'm really glad you want to continue blowing this silliness out of proportion with me. ;)
- Lj: Absolutely help! This very large, currently imageless undertaking definitely welcomes the additions. Just hold off on anything until ZB gets his initial stuff done for consistency. In fact, depending on the composite, perhaps ZB and you could share source photos for that same reason (if and where applicable). We'll keep you both updated as we progress, but in that same light, don't expect anything immediately immediately. Though the following pages shouldn't be nearly as time-consuming as the script and commentary page, we still kinda work at a leisurely pace, to keep ourselves sane (I mean, ya type like ahh, epic director Peter Bogdanovich.. sigh.. long enough.. oh, you're bound to start going sane. That is, insane. Same goes for the, erhmm.. quote fellow). :D --THINKER 14:57, 14 July 2007 (UTC)
I've added another deleted scene and have also started the Script I was babbling about on IRC yesterday. It's kind of shaky so far, not sure whether I'll go through with it or not. I'm tryin' to go for a kind of mix of Gorillas in the Mist and Grizzly Man. The "poacher" will probably be somebody with a lawnmower, or it could be people actually shooting blades of grass for their "coats". Feel free to add whatever you want to the script, and also to the deleted scenes page, which is linked to from here. I suppose I'll make "grass in the mist" a semi-national film, so I can whore it in the trailers section :-D --THE 13:43, 15 July 2007 (UTC)
- Heh, Semi-National Films produces only the highest quality nature documentaries.. So Gertrude, when did your son start getting interested in grass?, that made me laugh loudly (is the "grass" supposed to have a double meaning? If not, it probably should).
- I looked over the deleted scenes, and I've got a couple of funny editions for that (good stuff so far). I'm gunna work on those while constructing the previews page. --THINKER 17:19, 15 July 2007 (UTC)
- Trailers has been created. Its just a skeleton right now, but I did put up one preview that I find pretty humorous. And I love the endorsements scene, btw. :) --THINKER 17:46, 15 July 2007 (UTC)
- I'd narrate those. I'm pretty good at emulating a Don LaFontaine-esque voice. Say da word.-Sir Ljlego, GUN VFH FIYC WotM SG WHotM PWotM AotM EGAEDM ANotM + (Talk) 18:03, 15 July 2007 (UTC)
- Possibly. It might lose some of the appeal if its just the narration without all the backing score and other trailory elements, but it might be an option depending on how you pull it off. You're capable in such things though, so if you think you can do up something cool with em, make it so (once the page is complete). :) --THINKER 18:08, 15 July 2007 (UTC)
- Yeah, I possess Audacity and the pompous thought that I can use it well (audacity, if you will :D). I'll be glad to help, though God knows I may be driving before this thing is done.-Sir Ljlego, GUN VFH FIYC WotM SG WHotM PWotM AotM EGAEDM ANotM + (Talk) 18:23, 15 July 2007 (UTC)
- Added another trailer, and I might possibly do the Grass in the Mist trailer today too, depending on how much I get done on the script. I hadn't thought of some double-meaning humor for "grass", but I'll definitely add some subtle double meanings in there now, because that's a good idea. I'm still gonna have it as a cross of sorts between "gorillas in the mist" and "grizzly man", with a gorillas in the mist-esque story line, and a grizzly man-esque style of storytelling (I'll probably even add a "creepy coroner guy" in there towards the end). Ljlego:Audio for the trailers is indeed an interesting idea, but I'm wondering how much of the trailer's meanings would be lost without the visuals we describe in italics. We'll see though. --THE 11:54, 16 July 2007 (UTC)
Well, Thinker, I finished My script, and will add a trailer for it as soon as I finish typing this message. Could you read the script and let me know what you think? Any suggestions for further improvement?
I've added a trailer for Boggy's "Prehistoric women" movie to the trailers page along with the "grass" trailer. I couldn't resist one more excuse to write some more Bogdanovichisms :-D. I'll add some more deleted scenes if I can think of any more, but the four that are already there are the ones I initially had in mind when I created the page. So, you can add yours whenever you're ready. BTW, what do you think of Lj's idea, about making Flash...uh...trailers? --THE 19:37, 18 July 2007 (UTC)
- Hah, that script is rather funny! You have a knack for being humorous very quickly dude; you should keep some of those articles you don't start all the time! I'd say add some pictures into it and put it up on Pee Review, I'll do one of my seemingly-patented in-depth analysis..es.
- The deleted scenes is a page I've been clicking on, looking over quickly, then clicking out of without editing for some reason. I think once the trailers are complete I'll be more apt to going through those, and adding the one or two I have in my head.
- As for the Flash, it all goes back to the same principle: the only thing I care about is quality representing the work. Like everyone's favorite Peter, I'd be very concerned with the representation of our writing in the visual medium. While in print one can create their own vision of the trailer, in animation, it becomes what you see. Now, there are Flash geniuses out there, and it would be awesome to have that kind of thing represented on Un. But if we're talking about pirating copies, it doesn't seem the case. Again, I only encourage assistance in this mammoth undertaking we're working towards, but because of its proportion, I just want it to be awesome when presented to the public. Make it as such, and so it shall be. :) --THINKER 01:14, 19 July 2007 (UTC)
- Thinkerer:I put the script in The piddler for analysisness. No worries about waiting to go through the "deleted scenes" page, hell, I've been saying for...like...two weeks now that I'm gonna work on the "global warming" article and I still haven't done anything to it, my mind just gets fuzzy every time I try and I end up reading old flamewars or eating instead. This is a slow-moving project, as it always has been, and we'll finish when we finish. Ljlego:I agree with the Thinker on the flash thing. If you can make it good enough that it does justice to the written trailers, then go right ahead, but it would be extremely difficult, there's so much stuff in there that would be easy to fuck up on. Trying to represent some of the fairly elaborate trailers, particularly the "Grass" one, and the planet cheese one once it's written, in visual format, and having it be good quality, would be a major undertaking. --THE 15:10, 19 July 2007 (UTC)
Hey, Thinker...
- Hi, Thinker. I'm stuck on this article; could you put some more in there? It's got potential, but I'm stuck at the very beginning. Here's the link: UnScripts:American Colonization: The Musical; thanks! The Humbled Master 13:16, 14 July 2007 (UTC)
- Hey THM. I looked over the article, and I'm not sure I completely understand what you're looking to accomplish with the piece. Are the listed countries supposed to be characters themselves? Because we've got people talking in there that are representative of nations, but it just doesn't correlate to the list of players. Beyond that, what kind of structure do you want to take with the material? For example, is it going to be a satire, sticking closer to the actual colonization, or is it going to be more a parody of colonization itself, with lots of countries mixing it up in a more silly manner?
- Basically, I think I could probably do something funny with it, but since its not necessarily something I would write if left to my own devices, I can't really advance it without a bit more structure; what is there currently is too vague in direction. Work out some of these structural details and I'll add it to my Workinonit section. ;) --THINKER 15:07, 14 July 2007 (UTC)
- All right, thanks for the advice! :) The Humbled Master 15:27, 14 July 2007 (UTC)
- Okay, I've added a bit more. Think you can work your magic? ;) The Humbled Master 16:31, 14 July 2007 (UTC)
- Uh, Thinker? Are you there? :S The Humbled Master 19:06, 14 July 2007 (UTC)
- Patience is a virtue, my friend. Waiting three hours for someone who lives outside of this place (as most of us do) is nothing.-Sir Ljlego, GUN VFH FIYC WotM SG WHotM PWotM AotM EGAEDM ANotM + (Talk) 19:30, 14 July 2007 (UTC)
- Oh. Sorry, Ljlego. Didn't mean to be rude. :( The Humbled Master 20:21, 14 July 2007 (UTC)
Okay, now I think I'm seeing what you're getting at a little more clearly (or at least I have an idea that works within this structure). I'll add it to my to do, get to work on it sometime soon. Do be patient with the edits though, I'm not like a 24/7 on call editor or something.. take a cue from Manforman (and from my buddy Lj there too) :) --THINKER 22:40, 14 July 2007 (UTC)
- All right, I will! Thanks! ;) The Humbled Master 00:08, 15 July 2007 (UTC) | http://uncyclopedia.wikia.com/wiki/User_talk:The_Thinker/Archive_5?oldid=3081798 | CC-MAIN-2015-27 | refinedweb | 2,496 | 69.11 |
This project shows how to connect the TTGO T-Call ESP32 SIM800L board to the Internet using a SIM card data plan and publish data to the cloud without using Wi-Fi. We’ll program this board with Arduino IDE.
Watch the Video Tutorial
You can watch the video tutorial or continue reading for the complete project instructions.
Introducing the TTGO T-Call ESP32 SIM800L
The TTGO T-Call is a new ESP32 development board that combines an ESP32 with a SIM800L GSM/GPRS module. You can get it for approximately $11.
Besides Wi-Fi and Bluetooth, you can communicate with this ESP32 board using SMS or phone calls and you can connect it to the internet using your SIM card data plan. This is great for IoT projects that don’t have access to a nearby router.
Important: the SIM800L only supports 2G networks, so it will only work if 2G coverage is still available in your country. Check whether your country has a 2G network before buying, otherwise the board won't be able to connect.
To use the capabilities of this board you need to have a nano SIM card with data plan and a USB-C cable to upload code to the board.
The package includes some header pins, a battery connector, and an external antenna that you should connect to your board.
However, we had some issues with that antenna, so we decided to switch to another type of antenna and all the problems were solved. The following figure shows the new antenna.
Project Overview
The idea of this project is to publish sensor data from anywhere to any cloud service that you want. The ESP32 doesn’t need to have access to a router via Wi-Fi, because we’ll connect to the internet using a SIM card data plan.
In a previous project, we’ve created our own server domain with a database to plot sensor readings in charts that you can access from anywhere in the world.
In this project, we’ll publish sensor readings to that server. You can publish your sensor readings to any other service, like ThingSpeak, IFTTT, etc…
If you want to follow this exact project, you should follow that previous tutorial first to prepare your own server domain. Then, upload the code provided in this project to your ESP32 board.
In summary, here’s how the project works:
- The T-Call ESP32 SIM800L board is in deep sleep mode.
- It wakes up and connects to the internet using your SIM card data plan.
- It publishes the sensor readings to the server and goes back to sleep.
In our example, the sleep time is 60 minutes, but you can easily change it in the code.
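The sleep interval is controlled by just two numbers in the sketch: a sleep time in seconds and a seconds-to-microseconds conversion factor, because the ESP32 timer wake-up API expects microseconds. As a rough host-side sketch of that arithmetic (the constant names mirror the ones used later in the code; the `sleepMicroseconds` helper is hypothetical, not part of any library):

```cpp
#include <cstdint>

// Sketch of the ESP32 deep-sleep timer arithmetic. The wake-up API,
// esp_sleep_enable_timer_wakeup(), takes microseconds, so the sleep
// interval in seconds is multiplied by a conversion factor first.
constexpr uint64_t uS_TO_S_FACTOR = 1000000ULL; // microseconds per second
constexpr uint64_t TIME_TO_SLEEP  = 3600;       // seconds (60 minutes)

// Hypothetical helper: returns the value handed to the wake-up timer.
uint64_t sleepMicroseconds(uint64_t seconds) {
    return seconds * uS_TO_S_FACTOR;
}
```

Changing the interval is just a matter of editing `TIME_TO_SLEEP`; note that 3600 s becomes 3,600,000,000 µs, which is why the multiplication should be done in an unsigned 64-bit type (a plain 32-bit int would overflow).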
We’ll be using a BME280 sensor, but you should be able to use any other sensor that best suits your needs.
Hosting Provider
If you don’t have a hosting account, I recommend signing up for Bluehost, because they can handle all the project requirements. If you don’t have a hosting account, I would appreciate if you sign up for Bluehost using my link. Which doesn’t cost you anything extra and helps support our work.
Get Hosting and Domain Name with Bluehost »
Prerequisites
1. ESP32 add-on for Arduino IDE
We’ll program the ESP32 using Arduino IDE. So, you need to have the ESP32 add-on installed in your Arduino IDE. Follow the next tutorial, if you haven’t already.
2. Preparing your Server Domain
In this project we’ll show you how to publish data to any cloud service. We’ll be using our own server domain with a database to publish all the data, but you can use any other service like ThingSpeak, IFTTT, etc…
If you want to follow this exact project, you should follow the next tutorial to prepare your own server domain.
3. SIM Card with data plan
To use the TTGO T-Call ESP32 SIM800L board, you need a nano SIM card with a data plan. We recommend using a SIM card with a prepaid or monthly plan, so that you know exactly how much you’ll spend.
4. APN Details
To connect your SIM card to the internet, you need to have your phone plan provider APN details. You need the domain name, username and a password.
In my case, I’m using vodafone Portugal. If you search for GPRS APN settings followed by your phone plan provider name, (in my case its: “GPRS APN vodafone Portugal”), you can usually find in a forum or in their website all the information that you need.
I’ve found this website that can be very useful to find all the information you need.
It might be a bit tricky to find the details if you don't use a well-known provider, so you might need to contact them directly.
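Once you have the details, they go into three constants near the top of the sketch. Here's a hypothetical example using the Vodafone Portugal APN; replace the values with your own provider's settings (many providers leave the username and password empty):

```cpp
// GPRS credentials -- example values only, replace with your provider's settings
const char apn[]      = "internet.vodafone.pt"; // access point name
const char gprsUser[] = "";                     // GPRS username (often empty)
const char gprsPass[] = "";                     // GPRS password (often empty)
```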
5. Libraries
You need to install these libraries to proceed with this project: Adafruit_BME280, Adafruit_Sensor and TinyGSM. Follow the next instructions to install these libraries.
Installing the Adafruit BME280 Library
Open your Arduino IDE and go to Sketch > Include Library > Manage Libraries. The Library Manager should open.
Search for "adafruit bme280" in the Search box and install the library.
Installing the TinyGSM Library
In the Arduino IDE Library Manager search for TinyGSM. Select the TinyGSM library by Volodymyr Shymanskyy.
After installing the libraries, restart your Arduino IDE.
Parts Required
To build this project, you need the following parts:
- TTGO T-Call ESP32 SIM800L
- USB-C cable
- Antenna (optional)
- BME280 sensor module (Guide for BME280 with ESP32)
- Breadboard
- Jumper wires
You can use the preceding links or go directly to MakerAdvisor.com/tools to find all the parts for your projects at the best price!
Schematic Diagram
Wire the BME280 to the T-Call ESP32 SIM800L board as shown in the following schematic diagram.
We’re connecting the SDA pin to GPIO 18 and the SCL pin to GPIO 19. We’re not using the default I2C GPIOs because they are being used by the battery power management IC of the T-Call ESP32 SIM800L board.
Code
Copy the following code to your Arduino IDE but don’t upload it yet. First, you need to make some modifications to make it work.
```cpp
/*
  Rui Santos
  Complete project details at

  Permission is hereby granted, free of charge, to any person obtaining a copy
  of this software and associated documentation files.

  The above copyright notice and this permission notice shall be included in all
  copies or substantial portions of the Software.
*/

// Your GPRS credentials (leave empty, if not needed)
const char apn[]      = ""; // APN (example: internet.vodafone.pt)
const char gprsUser[] = ""; // GPRS User
const char gprsPass[] = ""; // GPRS Password

// SIM card PIN (leave empty, if not defined)
const char simPIN[]   = "";

// Server details
// The server variable can be just a domain name or it can have a subdomain.
// It depends on the service you are using
const char server[]   = "example.com";    // domain name: example.com, maker.ifttt.com, etc
const char resource[] = "/post-data.php"; // resource path, for example: /post-data.php
const int  port       = 80;               // server port number

// Keep this API Key value to be compatible with the PHP code provided in the project page.
// If you change the apiKeyValue value, the PHP file /post-data.php also needs to have the same key
String apiKeyValue = "tPmAT5Ab3j7F9";

// TTGO T-Call pins
#define MODEM_RST            5
#define MODEM_PWKEY          4
#define MODEM_POWER_ON      23
#define MODEM_TX            27
#define MODEM_RX            26
#define I2C_SDA             21
#define I2C_SCL             22

// BME280 pins
#define I2C_SDA_2           18
#define I2C_SCL_2           19

// Set serial for debug console (to Serial Monitor, default speed 115200)
#define SerialMon Serial
// Set serial for AT commands (to SIM800 module)
#define SerialAT Serial1

// Configure TinyGSM library
#define TINY_GSM_MODEM_SIM800      // Modem is SIM800
#define TINY_GSM_RX_BUFFER   1024  // Set RX buffer to 1Kb

// Define the serial console for debug prints, if needed
//#define DUMP_AT_COMMANDS

#include <Wire.h>
#include <TinyGsmClient.h>

#ifdef DUMP_AT_COMMANDS
  #include <StreamDebugger.h>
  StreamDebugger debugger(SerialAT, SerialMon);
  TinyGsm modem(debugger);
#else
  TinyGsm modem(SerialAT);
#endif

#include <Adafruit_Sensor.h>
#include <Adafruit_BME280.h>

// I2C for SIM800 (to keep it running when powered from battery)
TwoWire I2CPower = TwoWire(0);

// I2C for BME280 sensor
TwoWire I2CBME = TwoWire(1);
Adafruit_BME280 bme;

// TinyGSM Client for Internet connection
TinyGsmClient client(modem);

#define uS_TO_S_FACTOR 1000000ULL /* Conversion factor for micro seconds to seconds */
#define TIME_TO_SLEEP  3600       /* Time ESP32 will go to sleep (in seconds) 3600 seconds = 1 hour */

#define IP5306_ADDR          0x75
#define IP5306_REG_SYS_CTL0  0x00

bool setPowerBoostKeepOn(int en){
  I2CPower.beginTransmission(IP5306_ADDR);
  I2CPower.write(IP5306_REG_SYS_CTL0);
  if (en) {
    I2CPower.write(0x37); // Set bit1: 1 enable 0 disable boost keep on
  } else {
    I2CPower.write(0x35); // 0x37 is default reg value
  }
  return I2CPower.endTransmission() == 0;
}

void setup() {
  // Set serial monitor debugging window baud rate to 115200
  SerialMon.begin(115200);

  // Start I2C communication
  I2CPower.begin(I2C_SDA, I2C_SCL, 400000);
  I2CBME.begin(I2C_SDA_2, I2C_SCL_2, 400000);

  // Keep power when running from battery
  bool isOk = setPowerBoostKeepOn(1);
  SerialMon.println(String("IP5306 KeepOn ") + (isOk ? "OK" : "FAIL"));

  // Set modem reset, enable, power pins
  pinMode(MODEM_PWKEY, OUTPUT);
  pinMode(MODEM_RST, OUTPUT);
  pinMode(MODEM_POWER_ON, OUTPUT);
  digitalWrite(MODEM_PWKEY, LOW);
  digitalWrite(MODEM_RST, HIGH);
  digitalWrite(MODEM_POWER_ON, HIGH);

  // Set GSM module baud rate and UART pins
  SerialAT.begin(115200, SERIAL_8N1, MODEM_RX, MODEM_TX);
  delay(3000);

  // Restart SIM800 module, it takes quite some time
  // To skip it, call init() instead of restart()
  SerialMon.println("Initializing modem...");
  modem.restart();
  // use modem.init() if you don't need the complete restart

  // Unlock your SIM card with a PIN if needed
  if (strlen(simPIN) && modem.getSimStatus() != 3 ) {
    modem.simUnlock(simPIN);
  }

  // You might need to change the BME280 I2C address, in our case it's 0x76
  if (!bme.begin(0x76, &I2CBME)) {
    Serial.println("Could not find a valid BME280 sensor, check wiring!");
    while (1);
  }

  // Configure the wake up source as timer wake up
  esp_sleep_enable_timer_wakeup(TIME_TO_SLEEP * uS_TO_S_FACTOR);
}

void loop() {
  SerialMon.print("Connecting to APN: ");
  SerialMon.print(apn);
  if (!modem.gprsConnect(apn, gprsUser, gprsPass)) {
    SerialMon.println(" fail");
  }
  else {
    SerialMon.println(" OK");

    SerialMon.print("Connecting to ");
    SerialMon.print(server);
    if (!client.connect(server, port)) {
      SerialMon.println(" fail");
    }
    else {
      SerialMon.println(" OK");

      // Making an HTTP POST request
      SerialMon.println("Performing HTTP POST request...");
      // Prepare your HTTP POST request data (Temperature in Celsius degrees)
      String httpRequestData = "api_key=" + apiKeyValue + "&value1=" + String(bme.readTemperature())
                             + "&value2=" + String(bme.readHumidity()) + "&value3=" + String(bme.readPressure()/100.0F) + "";

      // Prepare your HTTP POST request data (Temperature in Fahrenheit degrees)
      //String httpRequestData = "api_key=" + apiKeyValue + "&value1=" + String(1.8 * bme.readTemperature() + 32)
      //                       + "&value2=" + String(bme.readHumidity()) + "&value3=" + String(bme.readPressure()/100.0F) + "";

      // You can comment the httpRequestData variable above
      // then, use the httpRequestData variable below (for testing purposes without the BME280 sensor)
      //String httpRequestData = "api_key=tPmAT5Ab3j7F9&value1=24.75&value2=49.54&value3=1005.14";

      client.print(String("POST ") + resource + " HTTP/1.1\r\n");
      client.print(String("Host: ") + server + "\r\n");
      client.println("Connection: close");
      client.println("Content-Type: application/x-www-form-urlencoded");
      client.print("Content-Length: ");
      client.println(httpRequestData.length());
      client.println();
      client.println(httpRequestData);

      unsigned long timeout = millis();
      while (client.connected() && millis() - timeout < 10000L) {
        // Print available data (HTTP response from server)
        while (client.available()) {
          char c = client.read();
          SerialMon.print(c);
          timeout = millis();
        }
      }
      SerialMon.println();

      // Close client and disconnect
      client.stop();
      SerialMon.println(F("Server disconnected"));
      modem.gprsDisconnect();
      SerialMon.println(F("GPRS disconnected"));
    }
  }
  // Put ESP32 into deep sleep mode (with timer wake up)
  esp_deep_sleep_start();
}
```
Before uploading the code, you need to insert your APN details, SIM card PIN (if applicable) and your server domain.
How the Code Works
Insert your GPRS APN credentials in the following variables:
const char apn[] = "";      // APN (example: internet.vodafone.pt) use
const char gprsUser[] = ""; // GPRS User
const char gprsPass[] = ""; // GPRS Password
In our case, the APN is internet.vodafone.pt. Yours should be different. We’ve explained previously in this tutorial how to get your APN details.
Enter your SIM card PIN if applicable:
const char simPIN[] = "";
You also need to type the server details in the following variables. It can be your own server domain or any other server that you want to publish data to.
const char server[] = "example.com";      // domain name: example.com, maker.ifttt.com, etc
const char resource[] = "/post-data.php"; // resource path, for example: /post-data.php
const int port = 80;                      // server port number
If you’re using your own server domain as we’re doing in this tutorial, you also need an API key. In this case, the apiKeyValue is just a random string that you can modify. It’s used for security reasons, so that only someone who knows your API key can publish data to your database.
The code is heavily commented so that you understand the purpose of each line of code.
The following lines define the pins used by the SIM800L module:
#define MODEM_RST      5
#define MODEM_PWKEY    4
#define MODEM_POWER_ON 23
#define MODEM_TX       27
#define MODEM_RX       26
#define I2C_SDA        21
#define I2C_SCL        22
Define the BME280 I2C pins. In this example we’re not using the default pins because they are already being used by the battery power management IC of the T-Call ESP32 SIM800L board. So, we’re using GPIO 18 and GPIO 19.
#define I2C_SDA_2 18
#define I2C_SCL_2 19
Define a serial communication for the Serial Monitor and another to communicate with the SIM800L module:
// Set serial for debug console (to Serial Monitor, default speed 115200)
#define SerialMon Serial
// Set serial for AT commands (to SIM800 module)
#define SerialAT Serial1
Configure the TinyGSM library to work with the SIM800L module.
// Configure TinyGSM library
#define TINY_GSM_MODEM_SIM800    // Modem is SIM800
#define TINY_GSM_RX_BUFFER 1024  // Set RX buffer to 1Kb
Include the following libraries to communicate with the SIM800L.
#include <Wire.h>
#include <TinyGsmClient.h>
And these libraries to use the BME280 sensor:
#include <Adafruit_Sensor.h>
#include <Adafruit_BME280.h>
Instantiate an I2C communication for the SIM800L.
TwoWire I2CPower = TwoWire(0);
And another I2C communication for the BME280 sensor.
TwoWire I2CBME = TwoWire(1);
Adafruit_BME280 bme;
Initialize a TinyGsmClient for the internet connection.
TinyGsmClient client(modem);
Define the deep sleep time in the TIME_TO_SLEEP variable in seconds.
#define uS_TO_S_FACTOR 1000000  /* Conversion factor for micro seconds to seconds */
#define TIME_TO_SLEEP 3600      /* Time ESP32 will go to sleep (in seconds) 3600 seconds = 1 hour */
In the setup(), initialize the Serial Monitor at a baud rate of 115200:
SerialMon.begin(115200);
Start the I2C communication for the SIM800L module and for the BME280 sensor module:
I2CPower.begin(I2C_SDA, I2C_SCL, 400000);
I2CBME.begin(I2C_SDA_2, I2C_SCL_2, 400000);
Set up the SIM800L pins in the proper state to operate:
pinMode(MODEM_PWKEY, OUTPUT);
pinMode(MODEM_RST, OUTPUT);
pinMode(MODEM_POWER_ON, OUTPUT);
digitalWrite(MODEM_PWKEY, LOW);
digitalWrite(MODEM_RST, HIGH);
digitalWrite(MODEM_POWER_ON, HIGH);
Initialize a serial communication with the SIM800L module:
SerialAT.begin(115200, SERIAL_8N1, MODEM_RX, MODEM_TX);
Initialize the SIM800L module and unlock the SIM card PIN if needed:
SerialMon.println("Initializing modem...");
modem.restart();
// use modem.init() if you don't need the complete restart

// Unlock your SIM card with a PIN if needed
if (strlen(simPIN) && modem.getSimStatus() != 3 ) {
  modem.simUnlock(simPIN);
}
Initialize the BME280 sensor module:
if (!bme.begin(0x76, &I2CBME)) {
  Serial.println("Could not find a valid BME280 sensor, check wiring!");
  while (1);
}
Configure deep sleep as a wake up source:
esp_sleep_enable_timer_wakeup(TIME_TO_SLEEP * uS_TO_S_FACTOR);
Recommended reading: ESP32 Deep Sleep and Wake Up Sources
In the loop() is where we’ll actually connect to the internet and make the HTTP POST request to publish sensor data. Because the ESP32 will go into deep sleep mode at the end of the loop(), it will only run once.
The following lines connect the module to the internet:
Prepare the message data to be sent in the HTTP POST request:
String httpRequestData = "api_key=" + apiKeyValue + "&value1=" + String(bme.readTemperature()) + "&value2=" + String(bme.readHumidity()) + "&value3=" + String(bme.readPressure()/100.0F) + "";
Basically, we create a string with the API key value and all the sensor readings. You should modify this string depending on the data you want to send.
The following lines make the POST request.
Finally, close the connection, and disconnect from the internet.
client.stop();
SerialMon.println(F("Server disconnected"));
modem.gprsDisconnect();
SerialMon.println(F("GPRS disconnected"));
In the end, put the ESP32 in deep sleep mode.
esp_deep_sleep_start();
Upload the Code
After inserting all the necessary details, you can upload the code to your board.
To upload code to your board, go to Tools > Board and select ESP32 Dev module. Go to Tools > Port and select the COM port your board is connected to. Finally, press the upload button to upload the code to your board.
Note: at the moment, there isn’t a board for the T-Call ESP32 SIM800L, but we’ve selected the ESP32 Dev Module and it’s been working fine.
Demonstration
Open the Serial Monitor at a baud rate of 115200 and press the board RST button.
First, the module initializes and then it tries to connect to the internet. Please note that this can take some time (in some cases it took almost 1 minute to complete the request).
After connecting to the internet, it will connect to your server to make the HTTP POST request.
Finally, it disconnects from the server, disconnects from the internet, and goes to sleep.
In this example, it publishes new sensor readings every 60 minutes, but for testing purposes you can use a shorter delay.
Then, open a browser and go to your server domain followed by the /esp-chart.php URL. You should see the charts with the latest sensor readings.
Troubleshooting
If at this point you can’t get your module to connect to the internet, it may be due to one of the following reasons:
- The APN credentials might not be correct;
- The antenna might not be working properly. In our case, we had to replace the antenna;
- You might need to go outside to get a better signal coverage;
- Or you might not be supplying enough current to the module. If you’re connecting the module to your computer using a USB hub without an external power supply, it might not provide enough current to operate.
Wrapping Up
We hope you liked this project. In our opinion, the T-Call SIM800 ESP32 board can be very useful for IoT projects that don’t have access to a nearby router via Wi-Fi. You can connect your board to the internet quite easily using a SIM card data plan.
We’ll be publishing more projects about this board soon (like sending SMS notifications, requesting data via SMS, etc.) so, stay tuned!
You may also like:
- $11 TTGO T-Call ESP32 with SIM800L GSM/GPRS (in-depth review)
- ESP32/ESP8266 Insert Data into MySQL Database using PHP and Arduino IDE
- ESP32/ESP8266 Plot Sensor Readings in Real Time Charts – Web Server
- ESP32 Web Server with BME280 – Advanced Weather Station
Learn more about the ESP32 with our resources:
- Learn ESP32 with Arduino IDE (Course)
- MicroPython Programming with ESP32 and ESP8266 (eBook)
- More ESP32 Projects and Tutorials
Thanks for reading.
28 thoughts on “ESP32 Publish Data to Cloud without Wi-Fi (TTGO T-Call ESP32 SIM800L)”
Thank you very much, Rui! Can we see in a video how you insert the SIM card in the board?
Also an idea, what about a simple app using app.inventor to send an SMS to this board and according to the SMS it checks a specific sensor and then it sends an SMS back, so the board gets the message and the phone number that sends the request to know the status of a sensor, for example someone wants to know the humidity or the temperature of a far away garden, or send a command to open a valve for 5 seconds for water, and with this we can send commands without the need of a router, which is GREAT !!!! THANKS AGAIN
Hi Tomas.
We’ll post more tutorials about this board: showing how to make things happen by sending SMS and requesting data via SMS.
So, stay tuned.
Thank you for following our work.
Regards,
Sara
Make tutorial about Biometric Attendance System using Fingerprint Sensor and this board.
Thanks for the suggestion!
Excellent as usual
Congratulations.
I was thinking of doing something very similar, but with your tutorial, things will be much easier and faster.
Great saved me hours and hours.
carlos Frondizi
Hi Carlos.
That’s great! Thank you for following our work 😀
Regards,
Sara
Thanks for the very good tutorial. I recently purchased a t-call and was a bit fazed by the documentation. This has spelled it out perfectly. One thought I have had is, if it takes up to one minute to connect to the internet, the esp32 could be awake for up to 50% of the time. Depending on the currency of the data required, it could simply take a reading once a minute, for say 10 minutes and then connect to upload all 10 data points.
This would require keeping a counter of where in the cycle of 10 readings it is. EEPROM I guess would be the best place to keep that variable.
This way uptime would be reduced to about 12%.
Thanks for the great resource.
Do you have a source for the improved antenna?
Hi Paul.
I’ve just added a link to the antenna in the parts required.
Regards,
Sara
Thanks, Sara……Paul
Great Tutorial Sara and Rui,
Can you tell us the data packet size send in one day ?
Hi, the SIM800L only supports 2G, right? Is there any possibility of using the same code with a GSM module that has a 3G/4G connection, to improve speed? Thanks.
By the way, great tutorial as always!!
Hi Umar.
That’s right.
The code should be compatible with other modules with just a few changes for proper initialization.
See the TinyGSM library documentation: github.com/vshymanskyy/TinyGSM
Regards,
Sara
Hello Sara and Rui!
A big thanks for your very informative and well organised instruction videos!
At present I am working with the TTGO T-Call unit, and sending data works perfectly. I am very interested to know how data can be sent using JSON; do you plan an introductory video on that in the (hopefully near) future?
Regards, Hans
As per other comments, this device and the variants of the Sim 900 are 2G devices and this service is no longer available in my country (Australia) and several others so please also include projects that use 3G or 4G as well
Regards
Phil
Hi Phil.
I’ve added a big note at the beginning of the post about that.
This board is 2G; that’s why it is so cheap compared with other 3G and 4G modules (which cost between $25 and $50 for the module alone, without the ESP32).
Unfortunately, 2G is not supported in all countries.
But, if you want to use a 3G and 4G module, most of the code should be compatible. The TinyGSM library is compatible with a wide variety of modules. You just need to initialize the module with the proper configurations.
We’ll take a look at some 3G and 4G modules and probably create some tutorials in the future.
Regards,
Sara
When the ESP is in deep sleep, are the ancillary devices on this board powered down (GSM module, serial-to-USB module, etc.)? Thinking about optimum battery life for in-the-field devices… Thanks.
To be honest, we still need to do more power consumption tests with this board, but you should be able to lower consumption quite significantly with deep sleep as used in this project.
Hello, my TTGO T-Call ESP32 SIM800L just arrived. After loading this firmware, the ESP32 endlessly prints the following error message, repeating non-stop::956
load:0x40078000,len:0
load:0x40078000,len:13076
entry 0x40078a581283 on core 0
Backtrace: 0x4008704c:0x3ffc62c0 0x4008714b:0x3ffc62e0 0x400d1283:0x3ffc6300 0x400e3b20:0x3ffc6330 0x400e3e8e:0x3ffc6350 0x400e41bd:0x3ffc63a0 0x400e3934:0x3ffc6400 0x400e3776:0x3ffc6450 0x400e380f:0x3ffc6470 0x400e385a:0x3ffc6490 0x400e1054:0x3ffc64b0 0x400e1003:0x3ffc64d0 0x400d1b6e:0x3ffc6500
Do you know how I could solve this problem?
Sorry for the post on the other tutorial that wasn’t this one.
Hi Eduardo.
Those kind of errors are very difficult to troubleshoot.
Sometimes the ESP32 keeps rebooting when we don’t provide enough power.
For example, if you’re using a USB hub, it may not provide enough current. Or try another USB port.
At the moment, I don’t know what exactly can cause that problem.
Regards,
Sara
Hi Sara and Rui!
Congratulations for your tuto.
I’m asking about making a mesh network using this module as the root node, avoiding the use of a router.
Would it be possible?
Thanks for your Job.👌
Thank you very much! I am from El Salvador, and I have a couple of ideas. I found today this “GSM Module SIM800L With MIC & 3.5mm Headphone Jack”; is it possible to integrate it with the ESP32 to create a very simple mobile phone?
Here are the ideas:
– voice to text. Sends a verbal instruction from a mobile to an ESP32
– text to voice. An ESP32 sends a SMS to another ESP32, like a temperature warning, but you don’t need to check incoming SMS because you will use speakers.
Hi Sara & Paul,
Great project, and well explained.
I’m thinking of using this for my (snail) mailbox, so I can get a notification when I receive my newspaper or letters. (My mailbox is far away from my home.)
So I will use a small switch to trigger when the mailbox receives mail.
Question: how do I set it up so that it triggers IFTTT, or will you do a tutorial on that?
I use Domoticz and that can receive triggers from IFTTT
keep up the good work
Hi Charles.
You can take a look at these tutorials and see if they help.
You just need to make an HTTP post request on the right webhooks URL.
Instead of searching for google sheets in the first tutorial, search for “email”
In both cases, I recommend searching for email as people were reporting some problems with the “gmail” option.
Regards,
Sara
P.S. It’s Sara and Rui (not Paul) 😀
Hi,
The SIM800L module is very power hungry and needs up to 2A (amperes) when transmitting. The module will reboot and blink 7-8 times when the voltage drops due to insufficient power. This is well described in the datasheet for the SIM800L. Use a LiPo battery charger and LiPo battery to power this board. A fully charged LiPo will give 4.1 volts, which is optimal for the SIM800. Max voltage is around 4.3-4.4 volts, and it will send out a message if the voltage is too high.
I also put a 1000 microfarad capacitor between + and – close to the module. Datasheets can be very useful for practical use of all kinds of modules and parts.
How can I connect this module with a LoRa module, please?
Many thanks Rui and Sara for another great tutorial, you have taught me a lot about the TTGO T-Call in an easy and understandable manner. My first remote air quality system using this module has been running well and collecting data.
With my new knowledge I have now successfully migrated over to using a SIM7000G module with NB IoT and an ESP32.
Eagerly awaiting another project!
Hi Richard.
Rui showed me your project and it looks great!
Congratulations!
It’s very rewarding to see what our readers can build with our tutorials.
Best wishes.
Sara 😀 | https://randomnerdtutorials.com/esp32-sim800l-publish-data-to-cloud/?replytocom=391886 | CC-MAIN-2019-51 | refinedweb | 4,433 | 63.49 |
End-to-end tests for web applications tend to get a bad reputation for failing consistently. A quick online search for "end-to-end tests flaky" yields tons of blog posts about how fragile these kinds of tests are. There are even plenty of posts of people and organizations who give up on end-to-end tests altogether.
This reputation isn't wholly unearned, however. End-to-end tests can be a pain to deal with during development. It comes with the territory, given the ground these tests cover. When lots of interactions and moving parts come into play, a single point of failure can bring everything crumbling down with a big, fat FAILED message.
Still, it gets incredibly frustrating when your tests fail when the functionality under test is the same. There are plenty of reasons why a full end-to-end test can fail for reasons other than functionality changes. One of the main reasons - if not the main reason - for failure is due to simple UI changes.
The way most web testing frameworks do their work is by looking up specific elements on a web page with element selectors. These selectors are often reliant on the implementation of those elements in the markup that generates the page. It means you need to know the element's ID or other attributes like a class name, so your test knows what it needs.
The problem comes when someone makes a small change to the interface that's being tested. If a developer changes a specific ID or attribute that the test looks for without updating the tests, it causes the test to fail since it can't find the element. Usually, these UI changes have no bearing on the functionality of the application. These failures are common and lead to wasted time and frustration.
There are also some issues in some modern web applications, where elements get dynamically generated. Since the testers won't know ahead of time how to find a specific element on the page, it becomes messy writing selectors to find one of these dynamic elements. These selectors are also very fragile since they often rely on the page's structure, making it easier to break tests.
Find your elements using Testing Library
To minimize testing issues caused by changes in an application's implementation, a set of utilities called Testing Library can help.
Testing Library is a collection of utilities providing methods that help select elements on a given page in a better way than using ID or classes. Instead of finding elements by a specific selector, you can use more-readable methods like finding input fields by label or selecting a button by its text. These methods minimize the risk of UI changes breaking your tests because it looks up elements in a more "human" way.
Note that it minimizes the risk; it doesn't eliminate it. The risk of UI changes breaking your tests is still present with Testing Library. However, with Testing Library, there's a higher possibility that a UI change breaking a test means that something functionally changed.
An example of a potential change in functionality after a UI change is when the text of a button changes. Usually, the text for a button indicates what it does. If that text for the button changes, it might signify a change in functionality. It's an early alert to figure out if the functionality under test needs to change.
Despite its name, Testing Library is not a single library, but more of a family of libraries. Its core library is called the DOM Testing Library, which contains the main methods of querying and interacting with a web page. This library is the basis for using Testing Library in lots of different JavaScript frameworks. There are libraries for React, Vue, Angular, Cypress, and much more.
Using Testing Library with TestCafe
This article covers the basics for getting started with Testing Library, using TestCafe as our test framework of choice.
A few weeks ago, Dev Tester covered how to get started with TestCafe. The article serves as an introduction to the framework, containing a few examples covering essential usage as a starting point. We'll use those tests to demonstrate how to use Testing Library in TestCafe. You can read the article to learn how to create the tests from scratch, or you can find the finalized code for that article on GitHub.
To begin using Testing Library for our TestCafe tests, we'll need to install and set up the TestCafe Testing Library package. This package allows you to use the Testing Library methods inside TestCafe.
To install the package, all you need to do is to run the command
npm install @testing-library/testcafe inside the directory where the tests are.
After installing the package, you need to set up the library. Testing Library needs to inject some code on the pages under test for its methods to work correctly across different browsers and test environments. To tell TestCafe to inject what Testing Library needs, we need to set up a configuration file.
When running TestCafe tests, the test runner first checks for the presence of the
.testcaferc.json file in the project's root directory. TestCafe applies any configuration settings here to your tests.
In this example, we need to use the
clientScripts setting to inject the Testing Library scripts for all your tests. Create a new file called
.testcaferc.json in the root directory for your tests and save the following:
{
  "clientScripts": [
    "./node_modules/@testing-library/dom/dist/@testing-library/dom.umd.js"
  ]
}
This configuration setting looks for the necessary scripts from the Testing Library package that we installed and injects them automatically when we run our tests.
With this setup completed, we're ready to use Testing Library. Our TestCafe tests now have the Testing Library API available for use.
Looking up elements with Testing Library
Let's check out how Testing Library works by updating our tests. First, let's use the simple test we have for verifying the Airport Gap home page. This test opens the Airport Gap home page and verifies that it contains an element with specific text.
The test only has one selector, defined in its page model (
page_models/home_page_model.js):
import { Selector } from "testcafe";

class HomePageModel {
  constructor() {
    this.subtitleHeader = Selector("h1").withText(
      "An API to fetch and save information about your favorite airports"
    );
  }
}

export default new HomePageModel();
Let's change that selector to use Testing Library instead:
import { getByText } from "@testing-library/testcafe";

class HomePageModel {
  constructor() {
    this.subtitleHeader = getByText(
      "An API to fetch and save information about your favorite airports"
    );
  }
}

export default new HomePageModel();
We made two changes to this page model class. The first change is importing the
getByText method from TestCafe Testing Library. This method searches for a node on the web page that contains the text content specified when calling the method. We won't use the
Selector method anymore so we can remove that import statement.
The other change was to the
subtitleHeader property. Here, we'll use the
getByText method to find the subtitle using its text. Note that we don't need to search for a specific element as we did before, looking for an
h1 element. Testing Library doesn't care what type of element it is, just what it does. In this case, we want to find something that has specific content.
If you re-run the home page test (
npx testcafe chrome home_test.js), the test passes. Functionally, this test works the same as before. However, the changes are a bit of an improvement. If someone decided to change the element from an
h1 to an
h2 element, the test would break even though the text is still there.
In all fairness, there's still a possibility of tests breaking because of a text change. However, this test is a very simple example and isn't a particularly useful example of a real-world test. Your end-to-end tests should not merely look for some basic text. Still, it's an excellent example to demonstrate how easily an end-to-end test can break and how Testing Library helps minimize these issues.
Filling out forms with Testing Library
Let's do something a little more with Testing Library to demonstrate its usefulness better. The other test we have validates the login functionality of Airport Gap. It loads the login page, fills out and submits the form, then verifies that we logged in successfully.
The page model for this test (
page_models/login_page_model.js) contains four selectors:
import { Selector } from "testcafe";

class LoginPageModel {
  constructor() {
    this.emailInput = Selector("#user_email");
    this.passwordInput = Selector("#user_password");
    this.submitButton = Selector("input[type='submit']");
    this.accountHeader = Selector("h1").withText("Your Account Information");
  }
}

export default new LoginPageModel();
Using Testing Library, let's update the selectors and see how it looks:
import { getByLabelText, getByText } from "@testing-library/testcafe";

class LoginPageModel {
  constructor() {
    this.emailInput = getByLabelText("Email Address");
    this.passwordInput = getByLabelText("Password");
    this.submitButton = getByText("Log In");
    this.accountHeader = getByText("Your Account Information");
  }
}

export default new LoginPageModel();
Here we have more interesting changes. We're using the same
getByText method we used in the previous test for finding the submit button and account header text. However, we are adding a new method:
getByLabelText. This method works by finding the label with the given name and then looks up the element associated with that label.
Once again, if you run the test, the test passes.
Why look up form elements by label text?
If you check out the Testing Library API, there are other ways to search for input elements, such as
getByPlaceholderText. However, the recommended way is to search for input elements by their label, if possible.
Searching for elements by the label has the additional benefit of ensuring that your labels are appropriately associated with form inputs. Having explicit or implicit label associations is essential for accessibility, helping remove barriers for people with disabilities.
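As a quick illustration (hypothetical markup, not taken from Airport Gap's actual page source), these are the two association styles that label-based queries can resolve; the user_email id here matches the one the old #user_email selector relied on:

```html
<!-- Explicit association: the label's "for" points at the input's "id" -->
<label for="user_email">Email Address</label>
<input id="user_email" type="email" />

<!-- Implicit association: the input is nested inside its label -->
<label>
  Password
  <input type="password" />
</label>
```

Either form lets getByLabelText("Email Address") or getByLabelText("Password") find the field without the test depending on ids at all.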
For more information on which query is most appropriate for your use case, read the Which query should I use? page in the Testing Library documentation.
Tips for minimizing risk with Testing Library
In all of the examples above, there's still the potential of UI changes breaking a test. For example, if someone changed the label "Email Address" label for the login form to something like "Company Email," the test would fail since it couldn't find the selector.
There are a few tips you can employ to your tests and application to further minimize the risk of implementation changes breaking your tests:
- Use regular expressions instead of looking for exact text. When using getByText with a string, it searches for the exact text by default. However, you can use a regular expression to find a substring instead. For instance, you can use /email/i to search for an element containing "email" anywhere in its content. Be aware that if you have multiple elements with the same term, your regular expression may not find the element you want.
- Use specific attributes that are less likely to change. Some Testing Library methods, like getByLabelText, can search for different attributes. For example, getByLabelText searches for the specified string in the for attribute, the aria-labelledby attribute, or the aria-label attribute. These attributes are less likely to change compared to searching for the label content itself.
- Use the getByTestId method. This method looks for elements containing the data attribute data-testid. This data attribute only serves as an identifier for your tests and won't affect how the element shows up on your page. Since its only use is for looking up elements for testing purposes, the attribute can contain any value and shouldn't need any changes even if the element changes drastically. It's also ideal for pages with dynamic content. The only downside is that you need access to the application's code to set up these attributes in the application itself.
Summary
End-to-end tests tend to be a bit more fragile than other kinds of testing. It's the nature of the beast, given how much coverage these tests provide. However, you can take some steps to reduce failures in your tests.
The methods provided by the Testing Library API help prevent unnecessary test breakage due to implementation changes that don't change your application's functionality. With Testing Library, you can look up elements in a way that's closer to how people look for them on a page. You don't need to worry about IDs, class names, or figuring out how to select a dynamic element.
The examples in this article describe the basics for Testing Library. The changes made to the tests we started with are minimal but cover most of how the library makes your end-to-end tests less prone to failure. In larger projects, the benefits are more apparent. Testing Library saves you and your team lots of wasted time and frustration.
What other problems caused your end-to-end tests to break frequently? How have you dealt with these issues? Let me know in the comments below!
The source code for the examples in this article is available on GitHub.
P.S. Was this article helpful to you? I'm currently writing a book that will cover much more about the TestCafe testing framework.
With the End-to-End Testing with TestCafe book, you will learn how to use TestCafe to write robust end-to-end tests on a real web app and improve the quality of your code, boost your confidence in your work, and deliver faster with less bugs.
For more information, go to. Sign up and stay up to date!
I have been trying to run the iOS unit test from command line. I am using Xamarin Studio on Mac for now, I Created a iOS Unit test project as follows
Add New Project --> iOS --> Tests --> Unit Test App
And added a simple Unit test class and its code snippets as shown below:
using System; using NUnit.Framework; namespace iOSUnitTest { [TestFixture] public class iOSTestSample { public iOSTestSample() { } [Test] public void MySampleTest() { Assert.True(true); } [Test] public void MyFailerTest() { Assert.False(true); } } }
From Xamarin Studio, I am able to run the application which deploys a app into simulator and execute the test cases. I am trying to automate it through a script.
Till now, I am able to get the Unit Test project build and install the app into the running simulator.
I am not sure how to automate the unit test execution once the app is installed.
Answers
Found Blog on Building Xamarin App with Atlansian Bamboo - which covers how we can automate iOSUnitTest Project, It really helped.
Used Touch.Server which can be checkout from Touch.Server Github Link
And using below Command, was able to have the iOSUnitTest project being automated.
Also Refer
Hi @Dinash,
Can you tell me how did you accomplish this?
I did exactly as you said on the first post.
I've already generated the Touch.Server.exe file and now I'm trying to use but I don't understand what and where is the *.app file... how do you get to generate it? is it from the ios unit test project?
Thanks in advance
Got it, .app is the directory.
Now I'm getting this:
[mtouch stderr 17:04:51] warning MT0061: No Xcode.app specified (using --sdkroot), using the system Xcode as reported by 'xcode-select --print-path': /Applications/Xcode.app/Contents/Developer
[mtouch stderr 17:04:52] error MT1212: Failed to create a simulator where type = SimDeviceType : com.apple.CoreSimulator.SimDeviceType.iPhone-4s and runtime = SimRuntime : 10.3 (14E269) - com.apple.CoreSimulator.SimRuntime.iOS-10-3.
[mtouch stderr EOS]
[mtouch stdout EOS]
[12/04/2017 17:04:53] : System.Net.Sockets.SocketException (0x80004005): interrupted
at System.Net.Sockets.Socket.Accept () [0x00039] in :0
at System.Net.Sockets.TcpListener.AcceptTcpClient () [0x00016] in :0
at SimpleListener.Start () [0x00003] in :0
Do you have any idea?
Thanks
Nvm, was missing --device=:v2:runtime=com.apple.CoreSimulator.SimRuntime.iOS-10-3,devicetype=com.apple.CoreSimulator.SimDeviceType.iPhone-7 on the properties of the Touch.Server.exe
Hello @JoaoSilva.7604
I am getting a similar error to what you were getting. I have tried passing in the device and device type but still getting the same error.
mono --debug /Users/nareshthandu/Downloads/Touch.Unit-master/Touch.Server/bin/Debug/Touch.Server.exe --launchsim .app \ --device=:v2:runtime=com.apple.CoreSimulator.SimRuntime.iOS-10-3 --devicetype=com.apple.CoreSimulator.SimDeviceType.iPhone-7 \ -autoexit \ -logfile=testresults_sim.log \ -verbose
Can you please highlight if I am doing anything wrong.
Thanks,
Naresh | https://forums.xamarin.com/discussion/comment/292872/ | CC-MAIN-2019-35 | refinedweb | 492 | 52.26 |
>>.'"
As if enough people weren't already confused... (Score:1, Troll)
--
Censored [blogspot.com] by [blogspot.com] Technorati [blogspot.com] and now, Blogger too! [blogspot.com]
Re:As if enough people weren't already confused... (Score:4, Informative)
Re:As if enough people weren't already confused... (Score:4, Informative)
Agreed it does look to take a lot of the grunt work out of writing parallel-processing code. There are supposedly Java and
Re: (Score:3, Insightful)
Re: (Score:2)
c++ abstracts away from ASM, so is it bad too?
Um, I wasn't saying it was bad, I just meant that referring to something as "transparent" usually implies that it makes it easier to see the implementation beneath. I thought "opaque" was more appropriate, because TBB obscures the details normally associated with writing multi-threaded code. I'm all in favor of abstracting away any details that tend to be tedious or error-prone. Especially when it comes to multi-threading, since AFAIK there haven't been any real breakthroughs in parallel algorithms, so
Re: (Score:2)
Re: ) Moder
Re: (Score:2)
Woohoo (Score:3, Insightful)....
Re: (Score:2, Offtopic)
Try to take my very crappy and unimportant GPLv2 code (note, not GPLv2 or any later version) and relicense it/use it with GPLv3 code and you'll be getting a letter from my lawyer. I dare you to do it to IBM.
Re: (Score:2): (Score:2)
Simply put, anything created BY the software does not matter. The GPL says nothing about that.
If you make a Word document in Open Office, is that document GPL'd? No. It's the same here. The binary that is created does not fall under the GPL as it is merely considered a document.
Again: The GPL3'd GCC can still compile programs that use ANY license, just as the GPL'd GCC can do today. The only dif
Re: (Score:2)
The GPL3'd GCC can still compile programs that use ANY license, just as the GPL'd GCC can do today. The only difference is that you will not be allowed to run the GPL3'd GCC on a device that doesn't comply with the GPL V3's requirements.
Just a minor correction to an otherwise informative post. You will be able to run GPLv3 code on a device that doesn't comply with the GPLv3's requirements, you just won't be able to distribute GPLv3 code on that device. The GPL (v2 or v3) doesn't stop you from modifying or running it however you want, it only puts requirements on your distribution..)
Re: (Score:2)
That depends on what you mean by "copy". Since it uses the "runtime exception", you can link to the library in a program released under any licensing models you want, you just can't modify the library itself and distribute it other than under the GPL.
Re: (Score:2)
Re: (Score:2)
The compilers don't matter... what does matter is that GPL3 code is incompatible with GPL2 code so you cannot copy this code into GPL3 programs you write unless Intel re-licenses it as GPL2/GPL3 code. If they never change the license on it, welcome to the software divide created by the FSF in their zeal to make the GPL3 incompatible with GPL2.
It depends on if Intel licensed this code "under the GPL version 2" or "under the GPL version 2 or (at your option) any later version". If they included the "any later version" option, then it can be included into the GPLv3 GCC. If not, then your statement is correct.
Re: (Score:2)
Re: (Score:2) ot
Re: (Score:3, Informative)
Re: (Score:2, Informative)
No you can:GPL 2 only (Score:5, Interesting)
-nB
Re: (Score:2)
Agreed. The real problem is that many projects lose touch with their contributors, and so can't contact them all and say, "what about moving to this new license?" Of course, even if they could, getting agreement on that would be tough.
Open-Source vs Commercial? (Score:3, Insightful)
Re: (Score:2)
The contrary of open source is closed source.
The contrary of commercial is non-commercial (too many angles on that one: not for profit, public, etc.)
The contrary of Free (libre) is enslaved.
The contrary of Free (gratis) is costly.
Proprietary is the contrary of public domain. (Note that public domain is ONE of the contraries of commercial...but not an exact match. Most commercial activity requires non-public-domain material or informational components.)
I'm glad to hear it (Score:5, Informative)
Re: (Score:2)
I think you're getting confused. Once threads are created they're scheduled by the OS whether they like it or not. An app can't do its own scheduling other than simply halting or not halting a thread though obviously it can decide when to create/destroy threads or allocate data to specific threads.
Re: (Score:2, Informative)
Re: (Score:2)
Looks good, but a little hampered by C++ (Score:5, Insightful)
But. As much as I love C++ ( and I do ) the real weakness is the lack of usable closures/lambda. The parallel_for example requires you to pass a functor to execute on ranges, which is fine, it makes sense, but since you can't define the closure in the calling-scope in C++ you end up filling your namespace with one-off function objects.
This is not a critique of TBB, but rather of C++. In java I can make an anonymous subclass within function scope. In python and hell even javascript I can make anonymous functions to pass around. But in C++ I can't, and this means that my code will be ugly.
Not that this is new news. I use Boost.thread for threading right now, and most of my functors are defined privately in class scope ( which is, at the very least, not polluting my namespace ) but it's too bad that I don't have a more elegant option in C++.
That being said, Boost.lambda makes my brain hurt a little, so my complaints are really just a tempest in a teacup. If I were smarter and could really grok C++ I could probably use Boost.Lambda and this would be a non-issue.
Re: (Score:2, Insightful)
Besides how hard is it to multicore manually, you can either subdivide a major loop, if its warranted, if it lasts 1us then its useless or
you might as well subdivide at the highest level. ie AI/AUDIO/3D
Javascript, even if running on 16 5ghz cores, would still be slower than 1 core 3ghz, so its a mute benefit of its 'magic functions'
I wouldn't want to depend on a generic system to make my random function appear faster, rather design it we
Re: (Score:2)
i rather have the other core free
I don't know your setup, but you've made the schoolboy error of assuming that everyone has 2 cores. I suppose in the future, you'll be the guy complaining that your new 64-core CPU only uses 2 of them, "why can't app writers figure out how many cores I have and use them all"
You don't need a core free to run apps, and having functors is a well established C++ paradigm for creating code, they're not any worse than calling a simple C function (even if they look strange sometimes - the compiler does all the wo
Re: (Score:2)
Well with a library such as this your code doesn't have to keep track of how many threads it is supposed to use or how many are available. You just write some parallel loops/functions and the library will scale the # of threads accordingly. I don't believe it would be all too difficult to explicitly tell the lib the number of threads to use (N, N/2, N-1, etc. wh
Re: (Score:2)
Re: (Score:2)
Then I guess you'll be happy to hear that the proposal [open-std.org] for lambda expressions is well on its way to getting included in C++09..
A job for Fortran . . . (Score:3, Informative)
Fortran 90 and later already have the structures for this (Forall, etc).
*sigh*
hawk, who hasn't written a line in over two years
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:3, Informative)
Local _functions_ aren't in C++, but may be a GCC extension - which might be confusing you.: (Score:2)
No - you'd need to ask someone close to the history of the standards process.
There are / have been proposal(s) to remove this limitation. The standards process grinds along pretty slowly - but it may happen, one day.
Re: (Score:2)
Re: (Score:2)
This is an error in C++ (although your compiler might support it as a non-standard extension)
class local {
public: void hello() { printf("hello world\n"); }
};
local::hello();
Must either declare 'hello' static, or call it as:
local().hello();
Oh, and if you are worried about cluttering up "the namespace", that's what namespace MySpace { } is for
Actually that's what
Re: (Score:2)
The interesting paper on adding lambda/closures to C++ looks very like it maps a lambda to a function object. The key here, is that the function object cheats by being on the heap so you have access to copies of captured variables when you call the closure. Not sure how this would work if you tried to communicate between two closures using t
Great news! (Score:2, Interesting)
This and XEN (Score:2)
Question: With this now GPL2 and open source, will this fix one of the problems of XEN?
XEN can only be run on certain processors when used with particular OSes, XP, namely. And, as I understood it, it was because of the threading. If XEN incorporates this into their system, will this open the door?
Re: (Score:3, Interesting)
Re: (Score:2)
Difficult to implement (Score:2, Interesting):
Re: (Score:2)
Well, yeah, considering it's an Intel software product, that Intel originally released under a closed-source license and probably charged a nominal fee for. (Intel's software is used to promote their hardware, after all, so even if they give it away for free, they don't lose out since their li
Re: (Score:2)
Since its GPLv2 rather than closed, to the extent that it is a useful library and easier to adapt to other processors/OS's than implement the API or an equally useful one from scratch, there is at least the potential of community-driven implementations for other environments.
Re: (Score:2)
If there is not technical limitation to the use on other processors and Intel just didn't warrant or claim it works on them, then it might work well with them, you just need a way to find out for sure. I ma guessing this might lead to a designed and tested for Inte
Re: (Score:2)
Re:Compatibility kinda sucks (Score:4, Informative)
Re: (Score:2): (Score:2)
While that's true, I assume that they would have listed one of the BSDs if they knew it worked on it. As far as I know, it required specific thin: (Score:2)
"Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed."
Thus, I expect that a court would find that Intel would be bound to the verbatim GPLv2 (which has "or any later version") unless they specifically say something of the kin of "modified GPLv2" wherever they mention the license they're using, and particularize the modifications prominently in their version.
They would also be
Re: (Score:3, Informative)
I would not. The verbatim GPLv2 states:
If the Program specifies a version number of this License which applies to it and "any later version", you have the option of following the terms and conditions eithe
Re: (Score:2)
No, if anything, in that case, Intel would be guilty of copyright violation on the license. Forcing them to adhere to the original license (assuming their product doesn't incorporate anything licensed under the unmodified license) would be an extraordinarily improbable remedy, in that case.
But, anyway, you're wrong: the GPLv2 doesn't contain an "or any later version" clause, it has a pro
Re: (Score:2)
No, if anything, in that case, Intel would be guilty of copyright violation on the license. Forcing them to adhere to the original license (assuming their product doesn't incorporate anything licensed under the unmodified license) would be an extraordinarily improbable remedy, in that case.
Incidentally, adherence to the original license would be a shield against litigation by Intel. In other words, Intel would not be permitted to rely upon modifications to the contr
Re: (Score:2)
I don't think that works if the original license wasn't the one Intel distributed. If Intel distributed the original license and tried to foist a modification off, promissory estoppel might limit the effect given to the revision (or at least a shield against som
Re: (Score:2)
I think you understand and are getting at what I was hoping to be able to convey.
If Intel said it was using the unmodified GPLv2, someone relied upon the unmodified GPLv2 e: (Score:2)
The easy way to answer this question is to compile and run a sample application I suppose...
Re: scal
"open-source" != "non-commercial" (Score:2)
The antonym of "open-source" is "closed-source" or "proprietary". Anyone telling you you can't use and distribute GPL'ed software commercially is in violation of the GPL.
Re: (Score:2)
Anyone telling you that is wrong, perhaps, but the GPL doesn't impose a universally-applicable limit on free speech which makes anyone who says it a violator of the GPL. (Someone distributing software that they got under the GPL who asserts a license term say you can't use their distribution/modification of that GPL software in commercial software might be in violation of the GPL, but most people who say
Re: (Score:2)
Re: (Score:2, Insightful)
That said, I'm sure most CS courses teach at least the basics of memory management, but people are still happy to rely on the Java garbage collector
Re: (Score:2)
Re: (Score:2)
I'm still waiting for someone to explain to me why this isn't even touched on in most CS programs.
In fact, I'm still waiting for someone to help me understand it.
:\
Re:task based then thread based (Score:5, Funny)
Obviously you are in the those who don't group.
Re:task based then thread based (Score:4, Funny)
The then/than mixup is kind of funny though. Reminds me of something I read in the engineering faculty on a white board (I assume a first year engineer):
"I'd rather be retarded then do my engineering homework.."
Looks like he had the pre-requisite fulfilled and should have just got on with the homework.
Re: (Score:2)
And obviously it's your fault.
Re: (Score:2)
Re: (Score:2, Funny)
Re: (Score:2)
1) those who don't use zero-based array indices, and
1) those who do
Re: (Score:2)
Re: (Score:2)
0 = 0
1 = 1
10 = 2
11 = 3
and so on
or am I mistaken?
Re: (Score:2)
Try it with the columns having 5,3,1,1 as their values instead of 8,4,2,1
Re:I'm thinking (Score:5, Informative)
And, if there was, well it's under the GPL now, and I'm sure someone would have added / corrected that mistake.
Re: (Score:3, Insightful), Mi:PS3? (Score:4, Informative)
Re: (Score:2)
Re: (Score:2, Informative) [intel.com]
Re: (Score:2)
Re: (Score:2) | https://developers.slashdot.org/story/07/07/25/1324221/intel-releases-threading-library-under-gpl-2?sdsrc=prevbtmprev | CC-MAIN-2018-05 | refinedweb | 2,662 | 66.88 |
Custom template tags in DjangoAug 15, 2008 Django Tweet
In various places in my project's site navigation, I need to be able to include dynamic content. For example, there will be a sidebar on the left that should always show a list of categories, with each one linking to its respective category page. A pretty common need on most web sites, right?
The category model is already defined in another app. Since I'll need this snippet to be available in templates throughout the project (for now it's just in the lefthand nav, but I predict we'll use it in other places) I decided to create a template tag that I can plug in anywhere.
The Django documentation covers the very basics of writing custom template tags, but the example they use is for a date/time tag - they don't go into detail about how to work with objects you've defined within your project:
Extending the template system (This stuff is also covered in the Django book, Chapter 10: Extending the Template Engine.)
It's worth skimming over the part about custom filters - you might find yourself referring back to it later. Or you can plunge right in here:
Writing custom template tags
You'll notice that there are a few different ways you can write/register your custom tags: a regular tag that relies on a Node subclass, a simple_tag, or an inclusion_tag (a template tag "that displays some data by rendering another template").
The inclusion tag is exactly what I needed for my navigation piece.
I started by adding a new app, named 'navigation', to my project, and adding it to the settings file:
The navigation app might never contain anything but these custom tags, and that's okay.
Here's what it does need to have:
INSTALLED_APPS = (
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.sites',
'django.contrib.admin',
'django.contrib.humanize',
'myproject.registration',
'myproject.tracking',
'myproject.articles',
'myproject.navigation',
)
I put my tags in a file called menu_tags.py (this name can be whatever you choose - it's the name you'll refer to when you load the tags into a template later).
Import the Django template module and register, like so:
navigation/
__init__.py
templatetags/
__init__.py
menu_tags.py
category_list.html
And since I'm also working with an object from elsewhere in my project, I imported that model:
from django import template
register = template.Library()
My method couldn't be simpler:
from articles.models import Category
I'm not even taking any arguments, just grabbing a list of all my active categories and returning them as a dictionary.
When I register the tag, I'm rendering the results in a template called 'category_list.html' (it can live in the same templatetags/ folder - and mine does, for now). The second part binds the nav_categorylist() method to the tag:
def nav_categorylist():
categories = Category.objects.filter(active=1)
return {'categories': categories}
The template, also absurdly simple:
register.inclusion_tag('category_list.html')(nav_categorylist)
Now the 'nav_categorylist' tag is available to any template throughout my project.
All the way back up in my project's base template, where all of my content blocks are defined, I'm first loading the custom tag library like so:
<div>
<div>Categories: </div>
{% for category in categories %}
<a href="/offerings/list/cat/{{ category.id }}/">{{ category.title }}</a><br />
{% endfor %}
</div>
Then calling the tag:
{% load menu_tags %}
<div id="leftnav">
{% block left_nav %}
{% nav_categorylist %}
{% endblock %}
</div> | http://www.mechanicalgirl.com/post/custom-template-tags-django/ | CC-MAIN-2020-16 | refinedweb | 578 | 50.46 |
I've found the Bloom::Filter module in CPAN but can't get it to work and am also worried about what level of false-positives I'm facing.
My current code is:
my $bloom_filter = Bloom::Filter->new(error_rate => 0.001, capacity =>
+ 30000000);
if ($bloom_filter->check($account_number)) {
... do deduping ...
} else {
$bloom_filter->add($account_number);
... do something ...
}
[download]
I'm looking for wisdom on two fronts:
Thanks.
From the code, I think any string will do as a salt. All it's doing is passing it as the second argument to sha1(), and all that does is append it to the first argument before hashing. The reason it talks about salts plural is that instead of using different hashing algorithms, it gets its vector coordinates by reusing the same algorithm with a different addendum. Throw some arbitrary but consistent four letter words at it and see what happens.
More generally, it seems like a neat approach but will require at least two passes to really work. Your second pass will have to weed out the false positives by rescanning the data set with a simple hash-based tally restricted to the output of the first pass. In that case you can choose the accuracy based on the number of false positives you can afford to screen for, and even 0.1% would give you a fairly manageable hash of 30,000 keys.
Incidentally, I'm no pack() leader, but if I were you I would think about rolling your own using only some bits of Bloom::Filter. It's biased towards working with email addresses, which are very heterogeneous, and there may well be regularities in your data set (they are IDs, after all) that make other algorithms much more efficient and discriminatory than just applying and reapplying the same hashing routine.
update: according to the perl.com article, with 30million keys and 0.1% accuracy, you need 10 hash functions to get the most memory-efficient filter. For 0.01% it's 13 functions and 30% more memory use, and so on:
my ( $m, $k ) = optimise(30000000, 0.0001);
print <<"";
memory usage = $m
hash functions = $k
sub optimise {
my ( $num_keys, $error_rate ) = @_;
my $lowest_m;
my $best_k = 1;
foreach my $k ( 1..100 ) {
my $m = (-1 * $k * $num_keys) /
( log( 1 - ($error_rate ** (1/$k))));
if ( !defined $lowest_m or ($m < $lowest_m) ) {
$lowest_m = $m;
$best_k = $k;
}
}
return ( $lowest_m, $best_k );
}
[download]]
Divide and conquor. You say you have two fields that are the key, one of which is unique. Take the right hand digit of the number and sort your records into 10 files by that digit. (Insert hand waving about it probably working out that this means you end up with roughly even size output files.) Now do your dupe checks on the resulting files. The thing to remember about perl hashes is that they grow in powers of two, that is they double when they are too small. So divide your file sufficiently that you stay within reasonable bounds. Divide by 10 has worked for me with equivelent sized data loads.. :-)
Oh, another approach is to use a Trie of some sort. If your accounts are dense then overall it can be a big winner in terms of space and is very efficient in terms of lookup.
First they ignore you, then they laugh at you, then they fight you, then you win.
-- Gandhi. :-)
I don't think you'll be able to beat this kind of databases, in their own game.
Er, maybe you mean this in a way I misunderstand but the algorithms that you are talking about dont split the data up. They apply an order to the data sure, and they deal with records more or less singly, but they dont split the data up.
As for the other point, well, I guess unless somebody bothers to benchmark we wont know. :-) Unfortunately right now I dont have the time.
if you're running updates in batches don't forget about quickish stuff that might work.
perl -le 'for(1..30_000_000){$x=int(rand(30_000_000));print $x;}' >/tm
+p/randnums
time sort -n /tmp/randnums > /tmp/randnumssorted
real 2m0.819s
user 1m52.631s
sys 0m2.798s
# used about 200m memory
time uniq -c /tmp/randnumssorted > /tmp/randuniq
real 0m11.225s
user 0m8.520s
sys 0m1.019s
time sort -rn /tmp/randuniq >/tmp/randuniqsort
real 1m0.062s
user 0m41.569s
sys 0m3.125s
head /tmp/randuniqsort
10 7197909
10 6080002
10 2718836
10 21596579
9 8257184
9 8116236
9 7721800
9 7706211
9 7657721
9 7490738
[download]
pull out your account numbers, sort/uniq to find duplicates. takes about 3 minutes and 200m memory.
there's nothing wrong with going over the file twice if it makes it easier to process..
A party
An organised event
A traditional gathering
With family and friends
Home alone
I don't celebrate the New Year
Adjusting my clocks for the Leap Second
I can't remember
Other
Results (177 votes). Check out past polls. | http://www.perlmonks.org/index.pl?node_id=346619 | CC-MAIN-2018-05 | refinedweb | 829 | 73.37 |
in reply to Use of "die" in OO modules
It is also difficult to come up with a consistent failure convention. Using undef is a bad idea as it is true in list context (any value is), and some functions will return undef as a valid response. -1 is less appropriate. You could bless an error object and return that, but that strikes me as an ugly solution.
I prefer to use Exception::Class, or if I am feeling lazy then die will do. Perl Best Practices has an excellent chapter on Error Handling (13) - I highly recommend it.
Thanks for your input, though I should have specified one thing more clearly: when I said I return (undef), I meant that I just return;, I don't return undef;. I'm aware of the problem with returning undef explicitly.
(For the unaware: a bare return gives the caller an undef, regardless of whether the call was in scalar or list context. Returning an explicit undef gives the caller undef in scalar context and a list containing one element (undef) in list context.)
. | http://www.perlmonks.org/?node_id=595322 | CC-MAIN-2018-17 | refinedweb | 182 | 70.23 |
Botscripten
A modified Trialogue/Twine engine specifically for Building, Testing, and Exporting conversations as Minimal HTML5
Upgrading? Check the Changelog
Botscripten is a chat-style Twine Story Format based on Trialogue and its predecessors. Unlike other Twine sources, Botscripten is optimized for an external runtime. That is, while you can use Botscripten for Interactive Fiction, that's not this story format's intent.
Botscripten is also available as an npm parser, able to handle Passage-specific features found in the Botscripten format. It's available via `npm install @aibex/botscripten` or `yarn add @aibex/botscripten`.
✅ You want to use Twine to author complex branching dialogue
✅ You want a conversation format (think chatbot)
✅ You want simple built-in testing to step through flows and get feedback
✅ You want a minimal output format for an external runtime
If "yes", then Botscripten is worth looking into.
Botscripten comes with two distinct flavors: an interactive output for testing and stepping through conversations in a pseudo chat interface based on the Trialogue code, and a built-in proofing version. External JavaScript keeps the output file small, making it easy to use the pure HTML in other systems.
- Botscripten
- 🚀 Setup and Your First "Chat"
- 🏷 Botscripten Tags
- 🗂 Recipes
- 📖 Node Module Documentation
- ⚠️ Why would you use Botscripten over (Insert Twine Format)?
- Developing on Botscripten
- Acknowledgements
🚀 Setup and Your First "Chat"🚀 Setup and Your First "Chat"
Add Botscripten as a Twine Story Format
- From the Twine menu, select `Formats`
- Then, select the `Add a New Format` tab
- Paste the Botscripten story format URL
- Click `Add`
Once you've done this, you will have access to the Botscripten story format in Twine. If you're migrating, be sure to check the Changelog for a migration guide.
Upgrading is as simple as removing your old Botscripten and adding the new URL above. Any stories you publish will automatically work in the new format.
(If you are interested in the `next` version of botscripten, you may use as your story format URL)
Create your first chat story
- Create a story in the Twine editor.
- Set your story format to `Botscripten`
- Edit the start passage to include:
  - Title (e.g. start)
  - Passage text (e.g. "Hi 👋")
  - One or more links (e.g. `[[What's your name?]]`)
  - Speaker tag (e.g. `speaker-bot`). This will display the speaker's name (in this case `bot`) in the standalone viewer
- Edit the newly created passage(s) to include:
  - Passage text (e.g. "My name is Bot")
  - One or more links (e.g. `[[Back to start->start]]`)
  - Speaker tag (e.g. `speaker-bot`)
- Hit `Play` to test the result
🏷 Botscripten Tags
Botscripten is designed to work exclusively with Twine's tag system. That means no code in your conversation nodes. This is important because behind the scenes, many other Twine formats convert Passages containing `<% ... %>` into JavaScript code, defeating the goal of portability.
The following passage tags are supported by Botscripten. It is assumed that anyone consuming a Botscripten formatted Twine story will also support these tags.
To maintain compatibility with the Twee 3 Specification, the tags `script` and `stylesheet` should never be used.
The Botscripten story format allows for simple comments. Lines beginning with an octothorpe `#` are removed from chat lines when playing a story, but remain in the source code for external tools.
If you'd like to place a comment across multiple lines, you can use a triple-octothorpe `###`. Everything until the next `###` will be considered a comment.
The following are all comments in Botscripten:
```
# I'm a comment, because I have a "#" at the start of the line
# It can
# cover
# multiple lines

### You can also use a triple # to denote a block
and everything is omitted until the next triple #
###

### If you need a literal #, you can escape it with a backslash like this: \###
###
```
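As a rough illustration only, the comment rules above could be applied with a small helper. This is not part of the Botscripten API (the real parser lives in `@aibex/botscripten`), and the escape handling is simplified:

```javascript
// Sketch: remove Botscripten comments from passage text.
// Not the real @aibex/botscripten parser — just the rules described above.
// In a real pipeline you would extract directives before stripping comments.
function stripComments(text) {
  // Remove block comments: a line starting with ### up to a closing ### line.
  const withoutBlocks = text.replace(/^###[\s\S]*?^###\s*?$/gm, "");
  return withoutBlocks
    .split("\n")
    // Drop line comments (lines whose first character is an unescaped #).
    .filter((line) => !line.startsWith("#"))
    // Turn the \# escape back into a literal #.
    .map((line) => line.replace(/\\#/g, "#"))
    .join("\n");
}
```

With a sketch like this, every line that begins with `#`, and everything between a pair of `###` markers, disappears from the rendered chat while staying in the Twine source.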
🗂 Recipes
Below are some common challenges & solutions to writing Twine scripts in Botscripten format
"Special" Comments (Directives)"Special" Comments (Directives)
If you look at the sample, you'll notice many of the comments contain an `@yaml` statement. While Botscripten (viewer) doesn't care about these items (they're comments after all), any other system parsing the Twine file can read these statements out of the comment blocks. Additionally, if you use Botscripten's npm engine, you'll have access to these special comments as part of the story parsing.
These special comments are called Directives and they consist of the comment identifier (`#` or `###`) immediately followed by `@` and a word. These are all Directives:
#@doAThing #@make party ###@sql INSERT INTO winners (name, time) VALUES ('you', NOW()) ###
Anyone parsing Botscripten Twine files can assume that the regular expressions
/^#@([\S]+)(.*)/g (inline) and
/^###@([\S]+)([\s\S]*?)###/gm (block) will match and extract the directive and the remainder of the comment.
For consistency between systems, directives should be run when a Passage is parsed, but before any tag behavior (such as
wait or
speaker-* are applied) This allows directives to form opinions about the Passage and it's output before rendering occurs.
There is no set definition for directives, as adding a directive to Botscripten would require every external parser to also support it. This is also why Botscripten is so light- there's almost no parsing being done of the individual Passages.
But if you'd like some examples, these are some directives we think are pretty useful and are worth implementing in your own conversation engine:
#@set <name> <value>- A directive that sets a local variable
<name>to value
<value>within the conversation
#@increment <name> <amount>- A directive to increment a local variable
<name>by amount
<amount>
#@end- A directive that tells the system to end a conversation (don't put any
[[links]]in this passage obviously!)
Conditional Branching (cycles, etc)Conditional Branching (cycles, etc)
Since Botscripten does not maintain a concept of state, nor have a way to script items such as cycling or conditional links, you should present all possible branches using the
[[link]] syntax. This will allow you to view all permutations in Botscripten when testing conversations locally.
Conditional branching can then be implemented as a Directive. This gives you control outside of the Twine environment as to which link is followed under what conditions. We're partial to a
###@next ... ### directive, but feel free to create your own!
Scripting Directives in BotscriptenScripting Directives in Botscripten
If you absolutely want to handle Directives in Botscripten, you can do so by selecting
Edit Story JavaScript in Twine, and registering a handler for your directive. For example, this logs all
@log directives' content to the developer tools console.
story.directive("@log", function (info, rendered, passage, story) { console.log("LOG data from " + passage.id); console.log("Directive contained: " + info); return rendered; // return the original (or altered) output });
Directives are evaluated after the Passage is parsed, but before any tag behaviors are applied.
📖 Node Module Documentation📖 Node Module Documentation
Most individuals are interested in writing for the Botscripten format, not consuming it. If you are looking to read Botscripten's Twine HTML files, and are also in a node.js environment, you can install Botscripten over npm/yarn and access the parser. Parsing a valid Botscripten HTML file will yield the following:
import botscripten from "@aibex/botscripten"; import fs from "fs"; const story = botscripten(fs.readFileSync("your/file.html").toString()); story = { name: "", // story name start: null, // name ID of starting story node startId: null, // numeric ID of starting story node creator: "", // creator of story file creatorVersion: "", // version of creator used ifid: "", // IFID - Interactive Fiction ID zoom: "", // Twine Zoom Level format: "", // Story Format (Botscripten) formatVersion: "", // Version of Botscripten used options: "", // Twine options tags: [ { // A collection of tags in the following format... name: "", // Tag name color: "", // Tag color in Twine }, // ... ], passages: { // A collection of passages in the following format... // pid is the passage's numeric ID [pid]: { pid: null, // The passage's numeric ID name: "", // The passage name tags: [], // An array of tags for this passage directives: [ { // An array of Botscripten directives in the following format... name: "", // The directive name content: "", // The content in the directive, minus the name }, // ... ], links: [ { // Links discovered in the passage in the following format... display: "", // The display text for a given link target: "", // The destination Passage's name }, // ... ], position: "", // The Twine position of this passage size: "", // The Twine size of this passage content: "", // The passage content minus links, comments, and directives }, // ... }, passageIndex: { [name]: id, // A lookup index of [Passage Name]: passageNumericId //... }, };
⚠️ Why would you use Botscripten over (Insert Twine Format)?⚠️ Why would you use Botscripten over (Insert Twine Format)?
First off, every Twine format I've worked with is amazing and super thougtful. If your goal is to create interactive fiction, self-contained tutorials, etc, you should just use Trialogue, Harlowe, or Sugarcube. However, if you're using Twine as a conversation editor (and you are more interested in the
tw-passagedata blocks and the data structure behind Twine) Botscripten may be for you.
- Zero
story.*Calls To be as portable as possible, No template tags may be used. That means your code cannot contain the
<% ... %>blocks seen in Trialogue/Paloma. These tags are incredibly difficult to parse/lex, because they assume a JavaScript environmemt at runtime. And since you don't know where your Twine file is going to run, you must decouple the programming from the data.
- Tags drive behavior Because of that first restriction, we need a way to perform actions within Botscripten. Thankfully, Twine's Tag system is up to the task. We strive to keep the tag count low to minimize the number of reserved tags in the system.
- Dev Experience Iterating on Twine templates is hard. A lot of time was spent to make the dev experience as simple as (1) put tweego in your executable path, and (2) type
npm run dev.
- Multiple Formats Botscripten provides two syncrhonized formats from the same repository. Features in the proofing / html5-min version will also show up simultaneously in the Interactive one.
Developing on BotscriptenDeveloping on Botscripten
Local DevelopmentLocal Development
- Acquire tweego and place it in your development path.
- run
npm installto install your dependencies
- run
npm run devto start developing using the twee files in the
examplesfolder
- Examples are available under
- TEST_Botscripten can be installed in Twine from
- When you are done developing/testing, be sure to remove the TEST_Botscripten format. If you forget, just restart the dev server so Twine doesn't complain every time you start it up
For local testing convienence, we have a
npm run tweego command. It ensures that Botscripten is in the
tweego path before performing a build.
As an example, the sample document was converted from Twine to Twee using the command
npm run tweego -- -d ./stories/sample.html -o ./examples/sample.twee. (You may need to manually edit the html file to set the format to "Botscripten")
AcknowledgementsAcknowledgements
Botscripten would not be possible without the amazing work of Philo van Kemenade for imagining Twine as a conversational tool, M. C. DeMarco who reimagined the original "Jonah" format for Twine 2, and Chris Klimas creator and current maintainer of Twine.
Botscripten is sponsored by Aibex. Love your career. | https://www.skypack.dev/view/@aibex/botscripten | CC-MAIN-2022-27 | refinedweb | 1,846 | 61.16 |
If that title sounds cryptic, let me show you the class structure (in API) that I have to work with:
public class Model {
public final void foo() {
// breakpoint is here
}
}
public class View1 {
private Model view1Model = new Model() {
...
}
}
public class View2 {
private Model view2Model = new Model() {
...
}
}
As you see, Model is the superclass of both anonymous classes.
I want to change the breakpoint to a conditional breakpoint that breaks only when the outer class of the anonymous class of `xxxModel` is `View2`. The class Model does not hold a reference to the outer class.
When broken at the breakpoint, I can see in the inspector that `this.this@0 == View1@1441` and I can add this phrase (`this.this@0`) as a watch (even though it's colored red as if an error, it works.) However, if I add that same phrase to the breakpoint then the VM cannot handle it and I get a breakpoint error when the breakpoint is reached.
[edit] I meant anonymous classes, not inner (although inner classes of Model also appear in the API)
The need for this has subsided, as I've debugged my way through the API with more effort, but it would be nice to eventually have an answer anyway.
I suppose you can try using conditional breakpoint with the condition that checks the class name like this.getClass() or use wildcards in the class filter.
Thanks, that seems to work. | https://intellij-support.jetbrains.com/hc/en-us/community/posts/360001718640-Conditional-breakpoint-on-outer-class-of-superclass-of-anonymous-class | CC-MAIN-2022-33 | refinedweb | 239 | 70.33 |
The C# reflection package provides a powerful introspection mechanism that allows class information to be obtained dynamically at run time. However it has a shortcoming in the form of not having dynamic proxy support.
There are instances when functionality needs to be interjected before and/or after a method invocation. However, modifying the code to add those extra calls might not be feasible; whether it�s because the code in question is a third party library, whose source is not available, or if the code needs to be invoked for all methods in a given class. One example would be adding timing logic to each method call so you can monitor the execution time of a method. Modifying all the methods to add that logic before and after the method is time consuming and will clutter your code with redundant code. This would be an instance where the use of a proxy would greatly speed up the process, therefore decoupling the timing code from the business logic. A proxy class would intercept all incoming method invocations, allowing new code to be interjected before the method invocation and after.
This article will briefly outline how to use the C# Emit feature to dynamically create proxy classes. It will outline the use of the dynamic proxy with an example that will illustrate a security filter. This filter will inspect incoming method invocations and determine if the method is accessible to a given role. If accessible, the method will be invoked. Otherwise an error is thrown. This is performed in a dynamic proxy to relieve the burden of having to implement the security check in every method. This allows the check to be localized in one location for better code reuse.
The provided source code includes all the code examples provided here as well as the complete dynamic proxy generation source.
Creating a dynamic proxy involves creating a class that traps incoming method invocations. There is no built in C# mechanism to do this, so this must be done dynamically at runtime. In order to accomplish this, we first define an interface
IProxyInvocationHandler. With this interface, it is up to the user to define a proxy handler class that implements this interface. It does not matter what the definition of the class looks like, as long as the contract provided by the interface is fulfilled.
public interface IProxyInvocationHandler { Object Invoke( Object proxy, MethodInfo method, Object[] parameters ); }
Listing 1 �
IProxyInvocationHandler Interface
The definition of the
IProxyInvocationHandler is completely up to the user. Listing 2 shows an example of a proxy handler�s Invoke method that performs a security check.
Public Object Invoke(Object proxy, System.Reflection.MethodInfo method, Object[] parameters) { Object retVal = null; // if the user has permission to invoke the method, the method // is invoked, otherwise an exception is thrown indicating they // do not have permission if ( SecurityManager.IsMethodInRole( userRole, method.Name ) ) { // The actual method is invoked retVal = method.Invoke( obj, parameters ); } else { throw new IllegalSecurityException( "Invalid permission to invoke " + method.Name ); } return retVal; }
Listing 2 � Invoke Declaration
The dynamic proxy that will be generated works by implementing all the interfaces of a given type. The dynamic proxy will also maintain a reference to the invocation handler that the user defined. For every method declared in the type�s interface(s), a simple implementation is generated that makes a call to the proxy handler�s Invoke method. The method implementation has the same method signature as that defined in the interface.
A
MethodInfo associated with the type, and the method�s parameters are passed in to the invocation handler. Here is where it gets a little tricky. The
MethodInfo instance we pass to the Invoke method has to be the
MethodInfo instance of the class that is going to be proxied. In order to accomplish that, we need access to that
MethodInfo object without having access to the class we are going to proxy. Remember, we only pass in an instance of the invocation handler to this dynamically generated class, not the actual class instance that we are going to proxy. We get around this by creating a utility class that we can use to get the
MethodInfo of a type by providing a unique name of the type and an index to indicate which
MethodInfo we are interested in. A call to
Type.GetMethods() returns a
MethodInfo array irregardless of the number of times its called, it becomes safe to assume that by hard coding the index of the method we want to invoke inside the dynamic proxy, we will always get the same
MethodInfo when the method is invoked. Listing 3 illustrates an example of what a method body in the dynamically created proxy class would look like.
public void TestMethod( object a, object b ) { if ( handler == null ) { return; } // Param 1: Instance of this dynamic proxy class // Param 2: TypeName is a unique key used to identify // the Type that is cached // Param 3: 0 is the index of the method to retrieve from // the method factory. // Param 4: Method parameters handler.invoke( this, MethodFactory.GetMethod( TypeName, 0 ), new object[] { a, b } ); }
Listing 3 � Generated Dynamic Proxy
The call to
MethodFactory.GetMethod takes the name of the object in order to lookup the Type of that object. Also passed in is the index of the
MethodInfo object we want. When we generated this class dynamically, we iterated through the list of methods that this object declares; therefore we knew what index to the method is in the array.
All of this is accomplished using the C# Emit feature. This powerful feature allows Types to be created at runtime by writing out IL (intermittent language) code.
Diving into the intricate details of Emit is beyond the scope of this article and will not be covered in any great detail. What takes place is a new assembly and module is created. With an assembly and module created, a
TypeBuilder can be constructed that represents a new Type. Various attributes can be defined such as the class scope, accessibility, etc. Once the type is created, fields, constructors, and methods can be constructed. For every method declared in the interface and any parent interfaces, a method is created similar to that outlined in Figure 3. The only differences between the methods are the number of arguments to be handled. Once the class and all methods have been defined, a new instance is created and returned to the caller. The caller can then cast the object to any of the interfaces passed in. The type that was just created is cached to improve on performance in case a new proxy instance of that type is needed.
Any subsequent call to a method of the generated class will then be calling the proxy method, which will in turn make a call to the Invoke method on the proxy handler. The user defined proxy handler can then perform any operation. Since the
MethodInfo object is passed in to the proxy handler, the actual method can be invoked.
In order to create an object that is proxied, a class needs to be defined that has a corresponding interface with of all the methods of interest. This is needed because the interface is what defines the contract that is the basis for creating the dynamic proxy. Listing 4 illustrates what a class to be proxied would look like.
public interface ITest { void TestFunctionOne(); Object TestFunctionTwo( Object a, Object b ); } public class TestImpl : ITest { public void TestFunctionOne() { Console.WriteLine( "In TestImpl.TestFunctionOne()" ); } public Object TestFunctionTwo( Object a, Object b ) { Console.WriteLine( "In TestImpl.TestFunctionTwo( Object a, Object b )" ); return null; } } public class TestBed { static void Main( string[] args ) { ITest test = (ITest)SecurityProxy.NewInstance( new TestImpl() ); test.TestFunctionOne(); test.TestFunctionTwo( new Object(), new Object() ); } }
Listing 4 � Creating a proxied object
The
TestBed class shows how to create an instance of the proxied
TestImpl class. The call to
NewInstance takes an instance of
TestImpl, which implements
ITest. That instance is what will be proxied. The return value is a dynamic proxy object that itself implements
ITest. In our example, invoking any method on that instance will cause the
SecurityProxy.Invoke method to be called.
Using a dynamic proxy currently has one limitation. In order to proxy an object, the object must have one or more interfaces that it implements. The reason being, we do not know what methods to proxy if an object with no interfaces is passed in. Sure we can go through all the methods that the instance has defined, but then methods like
ToString,
GetHashCode, etc, risk being proxied. Having a well defined contract using an interface allows the dynamic proxy to only proxy those methods outlined in the contract.
General
News
Question
Answer
Joke
Rant
Admin | http://www.codeproject.com/KB/cs/dynamicproxy.aspx | crawl-002 | refinedweb | 1,457 | 54.22 |
"Paul F. Kunz" <Paul_Kunz at SLAC.Stanford.EDU> writes: >>>>>> On Sat, 07 Dec 2002 15:02:53 -0500, David Abrahams <dave at boost-consulting.com> said: > >> "P (?) > > Yes. My C++ objects are wrap by Boost.Python > >>>. > > I don't see why SIP needs to know my objects are wrapped by > Boost.Python. But let me show you some Python code > > a = QApplication(sys.argv) > clock = DigitalClock() > clock.resize(170,80) > a.setMainWidget(clock) > clock.show() > # the above are Python objects wrapped with SIP > # below are my objects wrapped with Boost.Pythoin > wc = WindowController() > cw = wc.newCanvas() > > a.exec_loop() Oh, I see that there's no interaction on the C++ side. In that case, I agree with you. There's no reason Boost.Python and SIP should have to interact in any way. > >> 1. Get Boost.Python to wrap your objects in a way that SIP already >> undertstands > >> 2. Get SIP to understand the way Boost.Python already wraps objects > > Why should SIP need to know about my objects or vica versa? I assumed that because you cited a problem, there must be one, and the only kind of issue I can imagine arises when the C++ code wrapped with SIP needs to use a type which you've wrapped with Boost.Python, or vice-versa. > I've trace the point of a problem a bit further. Steping down the > call stack, I see... > > #6 0x0807705e in eval_frame (f=0x811a974) at Python/ceval.c:1784 > (gdb) > #5 0x08056794 in PyObject_GetAttr (v=0x82cc4cc, name=0x815a8e0) > at Objects/object.c:1108 > (gdb) > #4 0x401afa43 in instanceGetAttr () > from /usr/local/lib/python2.2/site-packages/libsip.so > > Note the function is in libsip.so. The line in PyObject_GetAttr() > that got us there appears to be... > > if (tp->tp_getattro != NULL) > return (*tp->tp_getattro)(v, name); > > Perhaps SIP depends on overriding something in Python in order to > interact with it's extension modules. I don't know, but this certainly appears to be a SIP problem. 
I don't think this has anything to do with Boost.Python. -- David Abrahams dave at boost-consulting.com * Boost support, enhancements, training, and commercial distribution | https://mail.python.org/pipermail/cplusplus-sig/2002-December/002555.html | CC-MAIN-2016-36 | refinedweb | 356 | 69.79 |
disk_lru_cache 0.0.2
disk_lru_cache #
Disk lru cache for flutter. wiki
A cache that uses a bounded amount of space on a filesystem. Each cache entry has a string key and a fixed number of files, witch is accessible as stream.
Use cases #
Working with memery);
Working with file system); }
Manage the cache #
Get the bytes of the cache in file system
DiskLruCache cache = ...; print(cache.size)
Clean the cache
DiskLruCache cache = ...; cache.clean();
[0.0.2] #
- Use synchronized library.
[0.0.1] - Basic usage. #
- LRU Map
- Store One
CacheEntrywith multiple files by using
CacheEditor
- Get
CacheSnapshotand open streams directly from
CacheEntry
- Using a key to remove from cache
- A record file to store all operation info,including 'DIRTY','CLEAN','REMOVE','READ'
- Get total size from cache
- When cache.size > caches.maxSize , auto trim to size.
import 'package:flutter/material. ); } }
Use this package as a library
1. Depend on it
Add this to your package's pubspec.yaml file:
dependencies: disk_lru_cache: :disk_lru_cache/disk_lru_cache [disk_lru_cache]
Package not compatible with runtime flutter-web of web
Because of the import of dart:io via the import chain package:disk_lru_cache/disk_lru_cache.dart->package:disk_lru_cache/_src/ioutil.dart->dart:io
Health suggestions
Fix
lib/_src/disk_lru_cache.dart. (-2.48 points)
Analysis of
lib/_src/disk_lru_cache.dart reported 5 hints:
line 5 col 8: Unused import: 'dart:ui'.
line 9 col 8: Unused import: 'package:flutter/foundation.dart'.
line 10 col 8: Unused import: 'package:flutter/material.dart'.
line 54 col 8: The value of the field '_hasRecordError' isn't used.
line 66 col 8: The value of the field '_mostRecentTrimFailed' isn't. | https://pub.dev/packages/disk_lru_cache | CC-MAIN-2020-29 | refinedweb | 267 | 51.34 |
From: Jean-Louis Leroy (jll_at_[hidden])
Date: 2005-08-20 04:13:01
I see two "bind" adapters in Boost 1.32: one from bind and one from
lambda. Although they seem to have the same interface, each seems to
have its own implementation.
In effect, "using namespace boost; using namespace boost::lambda;"
doesn't work. What's the story here? Are they going to merge someday?
Is one a full superset of the other, or better implemented (and I
should stick to that one)? Do you have suggestions about handling
this?
TIA...
-- Jean-Louis Leroy Sound Object Logic
Boost-users list run by williamkempf at hotmail.com, kalb at libertysoft.com, bjorn.karlsson at readsoft.com, gregod at cs.rpi.edu, wekempf at cox.net | https://lists.boost.org/boost-users/2005/08/13407.php | CC-MAIN-2021-49 | refinedweb | 126 | 80.07 |
Finally, a month where all the equity-fund categories were in the black. Stocks shot up during the first week of March on surprisingly strong economic news. But results tailed off from there as investors realized that corporate profits would lag behind the economic recovery.
Nonetheless, equity-fund categories posted good results across the board at month's end. For example, technology and precious-metals funds, which usually move in opposite directions, returned average increases of 9% and 10%, respectively, during March. Top-performing funds were high-risk names like Frontier Equity (FEFPX) and Jacob Internet (JAMFX), both of which gained more than 20%. Many technology funds scored 15% or better in the month (see BW Online's Interactive Mutual Fund Scoreboard).
Overall, with a 5.2% return in March, U.S. diversified stock funds handily beat the S&P 500-stock index' 3.7% showing. International funds did a notch better with a 5.3% return in the month. The worst-performing category was hybrid funds (which combine stocks and bonds in one fund), up only 2% in March.
GLITTERING GOLD FUNDS. The month's strong showing helped round out the quarter's results. For 2002's first three-month period, the S&P 500 and U.S. diversified equity funds were basically flat. International equity fared a bit better, up 3.3%
Standouts in the quarter were precious-metals funds, up a stunning 37% on average. Some gold funds scored gains of 50% or better. First Eagle SoGen Gold (SGGDX) is the only top-performing gold fund that earns an A-rating within its category from BusinessWeek. Investors in diversified emerging markets funds also couldn't complain, with a quarterly gain of 12% on average.
Of diversified U.S. equity funds, small-cap value fared the best in the quarter, with a 7.5% return (large-cap value and mid-cap value both gained around 2%). Real estate, Latin America, and Asia funds also turned in respectable returns of 7% or so on average for the first quarter.
BOTTOM-HUGGERS. The fund world's laggards so far this year are mainly communications, down 15% on average. They were followed by health and technology, which were down half as much in the quarter. Clustered at the bottom on the quarterly rankings are funds like Fidelity Select Wireless (FWRLX), Van Wagoner Emerging Growth (VWEGX), and Invesco Telecommunications (ISWCX), all down about 26% this year through the end of March. Large-cap, mid-cap, and small-cap growth fund categories posted negative returns in the quarter of about 2%.
On the fixed-income side, returns were poor in March as investors worried that the economic recovery would lead to higher interest rates, which would hurt bond prices. Long- and intermediate-term bond funds fell about 2% on average, while short- and ultrashort-term bond funds were roughly flat in March. In a good month for equities, the best-performing bond funds were the riskiest -- convertible and high-yield categories gained 3.5% and 1.9%, respectively.
For the quarter, bond funds, on average, did about the same as stock funds: Returns were essentially flat. The quarter's only standout bond-fund category was emerging markets, up 6.8%. Phoenix-Goodwin Emerging Markets Bond (PEMBX), up 9.6% for the quarter, and Scudder Emerging Markets Income (SCEMX), up 9.3%, were the top first-quarter fixed-income performers.
By Amey Stone in New York
Upgraded to BusinessWeek's A list in March:
AIM Global Health Care A
AIM Mid Cap Equity A
AIM Small Cap Growth A
Baron Growth
Bjerman Micro Cap Growth
Liberty Select Value A
MFS New Discovery A
MS Hlth Sciences B
MS Instl Mid Cap Value Instl
Muhlenkamp
Munder Micro Cap Equity A
North Track PSE Tech 100 Idx A
One Group Mid Cap Growth I
Reserve Private Eqty Small Cap Growth R
VM West Coast Equity A
Wachovia Special Values A
Wasatch Ultra Growth
Wells Fargo Small Cap Opportunities I
Downgraded from BusinessWeek's A-list in March:
Fidelity Balanced
Fidelity New Millennium
Fidelity Select Health Care
Fidelity Select Transportation
Heritage Capital Apprec A
Leuthold Core Investment
Longleaf Partners Small Cap
Merrill Lynch Focus Value B
Mutual Beacon Z
Mutual Discovery Z
Purisima Total Return
US Global Leaders Growth
Value Line Special Situations
Vontobel US Value
Wells Fargo Strategic Income I
WM Growth Fund of the Northwest A By Amey Stone in New York | http://www.bloomberg.com/bw/stories/2002-04-09/march-funds-in-like-a-lion-dot-dot-dot | CC-MAIN-2015-11 | refinedweb | 737 | 54.12 |
On Fri, 21 Feb 1997, Dave Kinchlea wrote: >. I have PAM 0.50 and I also looked at /modules/pam_unix/pam_unix_passwd.c, but i didn't notice any flocking (or calling fcntl) in it. Here is this file ... #include <stdlib.h> #include <stdio.h> #include <syslog.h> #ifndef LINUX /* AGM added this as of 0.2 */ #include <security/pam_appl.h> #endif /* ditto */ #include <security/pam_modules.h> static char rcsid[] = "$Id: pam_unix_passwd.c,v 1.1 1996/03/09 09:10:57 morgan Exp $ (C) Alexander O. Yuriev"; int _pam_unix_chauthtok( int flags, int argc, const char **argv ); int _pam_unix_chauthtok( int glags, int argc, const char **argv ) { syslog( LOG_DEBUG, "_pam_unix_chauthtok() is not implemented. Returning PAM_SUCCESS "); return PAM_SUCCESS; } int pam_sm_chauthtok( pam_handle_t *pamh, int flags, int argc, const char **argv) { return ( _pam_unix_chauthtok( flags, argc, argv ) ); } __________________________ Can you tell me what does pam exactly do to lock /etc/passwd ? Thank yopu for help :) Sasha | https://listman.redhat.com/archives/pam-list/1997-February/msg00126.html | CC-MAIN-2021-43 | refinedweb | 150 | 62.75 |
React Native Tab View Example Tutorial
React Native Tab View Example Tutorial is today’s leading topic. Recently I needed tabbed navigation for a React Native app I’m working on, so in this example I will show you how you can create a tab navigation system in React Native. First, we register the different screens using the Navigation object, and then we call the function startTabBasedApp(params), which takes an object as an argument. We saw startSingleScreenApp(params) in the previous example; now we will navigate through multiple screens using tab navigation. Now, start this example by installing React Native.
Content Overview

- React Native Tab View Example Tutorial
- #1: Setup React Native For iOS device.
- #2: Configure both the libraries inside Xcode.
- #3: Create screens.
- #4: Create a HomeScreen.
- #5: Register all three screens.
- #6: Add React Native Vector Icons.
React Native Tab View Example Tutorial
Okay, now we install React Native using the React Native CLI. If you have not installed it previously, install it globally using the following command.
#1: Setup React Native For iOS device.
Type the following command.
npm install -g react-native-cli
Create a new project using the following command.
react-native init rncreate
Now go inside that project folder.
cd rncreate
Type the following command to install the libraries.
yarn add react-native-navigation@latest react-native-vector-icons
So, we are using two libraries: react-native-navigation and react-native-vector-icons.

Now, we need to configure these libraries. Remember, we will test our project on the iOS simulator and not on an Android emulator, so this demo is oriented specifically toward iOS development.
#2: Configure both the libraries inside Xcode.
Okay, now we need to open the project in Xcode. So open Xcode and open the rncreate >> ios folder. Now, we need to add the libraries from node_modules. For react-native-navigation, follow the iOS installation steps from its own documentation (broadly: add its Xcode project under the Libraries group, link its static library in Build Phases, and update AppDelegate as that guide describes).
#Configure react-native-vector-icons.
1) Open the ios folder inside Xcode and look at the project navigator. You will see a folder called Libraries. Right-click on that folder and click on Add Files to rncreate.

It will open up a file browser. Browse to the project root's node_modules folder, navigate to react-native-vector-icons, and from inside it add RNVectorIcons.xcodeproj.
2) The second step is inside Xcode: click on the root project and open the Build Phases tab. Here we also need to add a file: under Link Binary With Libraries, search for the file called libRNVectorIcons.a, add it, and you are done.
#3: Create screens.
Now, you can close Xcode and open the project in your favorite editor. Inside the root folder, create a folder called screens, and inside it create two files with the following names.
- InboxScreen.js
- OutboxScreen.js
Write the following code inside both of the files. Remember, when we navigate between different tabs, we will see these screens.
// InboxScreen.js

import React, { Component } from 'react';
import { View, Text } from 'react-native';

export default class InboxScreen extends Component {
  render() {
    return (
      <View>
        <Text> This is Inbox(4) </Text>
      </View>
    )
  }
}
Also, write the following code inside OutboxScreen.js file.
// OutboxScreen.js

import React, { Component } from 'react';
import { View, Text } from 'react-native';

export default class OutboxScreen extends Component {
  render() {
    return (
      <View>
        <Text> This is sent messages(10) </Text>
      </View>
    )
  }
}
In the same folder, that is, inside the screens folder, we also need to create one more file, which will be our main tab file. Let us call that file startMainTab.js.
Write the following code inside the startMainTab.js file. In it, we need to define all the screens that have a tab layout; we have created two screens so far, so we pass those two items as arguments. We destructure the Navigation object from react-native-navigation and call the startTabBasedApp() function on it, which takes an object consisting of an array of the different screens.
#4: Create a HomeScreen.
Now, inside the screens folder, create one file called HomeScreen.js. Here we need to import that startMainTab.js file.
// HomeScreen.js
import React, { Component } from 'react';
import { View, Text, Button } from 'react-native';
import startMainTab from './startMainTab';

export default class HomeScreen extends Component {
  onButtonPress = () => {
    startMainTab();
  }

  render() {
    return (
      <View>
        <Text>Home Screen</Text>
        <Button title="Tab Navigation" onPress={ this.onButtonPress } />
      </View>
    );
  }
}
When the application starts, we will see this screen and also one button titled Tab Navigation.
#5: Register all three screens.
Inside the App.js file (which is in the root folder), we need to replace the code with the following.
// App.js
import { Navigation } from 'react-native-navigation';
import HomeScreen from './screens/HomeScreen';
import InboxScreen from './screens/InboxScreen';
import OutboxScreen from './screens/OutboxScreen';

Navigation.registerComponent('rncreate.HomeScreen', () => HomeScreen);
Navigation.registerComponent('rncreate.InboxScreen', () => InboxScreen);
Navigation.registerComponent('rncreate.OutboxScreen', () => OutboxScreen);

Navigation.startSingleScreenApp({
  screen: {
    screen: 'rncreate.HomeScreen',
    title: 'Home'
  }
});
So, when the application starts, the first screen will be the HomeScreen. If we create 10-20 screens for our application, then we need to register all the screens over here.
Now, we will not use the index.js file that comes with React Native by default; instead, we will create an index.ios.js file specifically for iOS development. So inside the root, create an index.ios.js file and add the following code inside it.
// index.ios.js
import App from './App';
We need to import the App.js file, and that is it. Nothing more. If you have configured the third-party library settings correctly, compile the project using the following command.
react-native run-ios
You will see this first screen.
Now click on the Tab Navigation button, and you will see the following screen.
Here at the bottom, you can see the two tabs.
- Inbox
- Outbox
You can click on each, and it will display the content accordingly. That completes the basic React Native tab view example.
#6: Add React Native Vector Icons.
Inside the screens >> startMainTab.js file, we need to add the Ionicons. Our final file looks like this.
// startMainTab.js
import { Navigation } from 'react-native-navigation';
import Icon from 'react-native-vector-icons/Ionicons';

const startTabs = () => {
  Promise.all([
    Icon.getImageSource("md-map", 30),
    Icon.getImageSource("ios-share-alt", 30)
  ]).then(sources => {
    Navigation.startTabBasedApp({
      tabs: [
        {
          screen: 'rncreate.InboxScreen',
          label: 'Inbox',
          title: 'Inbox',
          icon: sources[0]
        },
        {
          screen: 'rncreate.OutboxScreen',
          label: 'Outbox',
          title: 'Outbox',
          icon: sources[1]
        }
      ]
    });
  });
}

export default startTabs;
Finally, we have added icons to our application, and our React Native Tab View Example is over.
Ok, I know one way of using the formula n*(n+1)/2 and have used it to calculate the sum already, but I just wanted to know how I can access many numbers in a loop and add them together. Here is what I did so far.
Code:
#include <iostream>
using namespace std;

int main() {
    int num, num1, sum;
    cout << "Please enter a number\n";
    cin >> num;
    cout << "Please enter the number to which you want to count\n";
    cin >> num1;
    for (; num <= num1; num++)
        cout << num << ",";   // I do get the numbers but from here
                              // I want to know what I can do so that I can add each
                              // of these numbers together?
    do (sum = num + num);     // I want to add them like 0+1+2+...
                              // tried using loops but couldn't get it
    while (num <= sum);
    cout << sum;              // but this does not work so any help is appreciated
    return 0;
}
This is the third installment in a series of posts regarding software programming on the PIC18F CPU family. You can find the first here and the second here.
Linker
The linker is expected to group together the code from all the different compilation units produced by the compiler and generate the binary. Since the PIC18F architecture is what it is, this is not a trivial task.
The compiler groups data into data sections. These sections may be assigned to a specific region of the data memory via the linker script.
This file is defined for every project, and you had better familiarize yourself with its syntax.
In fact, some troubles arise from a bad linker script and can be fixed by changing it.
For example, the compiler uses a temporary data section (named .tmpdata) – to store intermediate expression results – that is free to float around in the data memory. If the linker script is modified without care, this section may fall across a bank boundary, causing wrong computation in the best case and memory corruption in the worst.
The default project comes with a default linker script that prevents data objects from crossing bank boundaries. (Note that linker script banks are not data memory banks, but user-defined regions of memory; you may want to make linker script banks coincide with data memory banks to avoid bank-switching problems.) So, by default, you are protected from this kind of fault (at the cost of some slack space, unless your code is lucky enough to fill all the pages perfectly). But as the project size increases, your data objects will grow as well. So you may be tempted (I was) to merge all the banks into a big one.
I did, then I found many unexpected troubles because of this (see the .tmpdata and C startup problems for example). So I wrote a small awk script to run over the map file to spot these problems:
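The script itself was lost in extraction; a minimal sketch of the idea follows (this is my reconstruction, not the author's script — it assumes MAP lines of the form <symbol> <section> <size-in-decimal-bytes>; real linker MAP layouts vary, so the field positions would need adjusting).

```shell
#!/bin/sh
# Hypothetical sketch: flag data objects too large for one 256-byte bank.
# Assumes each data line of the MAP file is: <symbol> <section> <size>
awk 'NF >= 3 && $3 ~ /^[0-9]+$/ && ($3 + 0) > 256 {
    printf "%-24s %-16s %d bytes\n", $1, $2, $3
}' "$1"
```

Run against the project's .map file, this lists only the objects that cannot fit in a single 256-byte bank.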
From the results I selected those modules that have large data objects. I found three large data objects of 360, 600 and 501 bytes respectively. So I modified the linker script to have 3 multi-page banks – 2 banks composed of 2 pages and 1 spanning over 3.
In this way the linker is forced to put big objects in those multi-page banks, but it will keep all the other objects within a single bank as required.
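As a concrete illustration (the addresses and names below are mine, not the author's): MPLINK linker scripts declare data banks with DATABANK directives, and merging adjacent entries yields a multi-page bank.

```text
// Two default single-page banks...
DATABANK   NAME=gpr2       START=0x200     END=0x2FF
DATABANK   NAME=gpr3       START=0x300     END=0x3FF

// ...replaced by one two-page bank able to hold a ~360-byte object:
DATABANK   NAME=gpr2_3     START=0x200     END=0x3FF
```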
The best option you have is to start with a default linker script and then merge together adjacent banks as soon as you discover a large data object (this will be reported by an obscure linker error message pointing to a completely innocent module).
The linker is also very uninformative about errors: you are only allowed to know that you ran out of memory. To be more precise, you are allowed to know even that only after some training, because the error message is very obscure, something along the lines of "section <a section name you are unaware of> cannot fit some other section".
Assembler
Since PIC18s are basically C-unfriendly, some assembly may be required. If you need a little bit of assembly, you can write it directly in the C source code (at a price we'll see later). If you need some more assembly, you want to use the separate assembler. In this case you can take full advantage of specific assembly directives and/or macros, but then you lose integration with the C language. In fact, the assembler cannot fully understand C preprocessor directives, making it impossible to use the same header file for inclusion in both C and assembly.
There are two ways to work around this, both not very satisfying. First, you can write shared header files with the common subset of preprocessor directives shared by both assembly and C. Just keep in mind that the rules for locating header files differ.
The other way is to write a filter (more or less complex according to the complexity of your header files) for converting C headers into assembly includes.
I went the latter way because it seemed simpler: just convert C comments into assembly language comments. Then I modified the filter to resolve include files, and I gave up when I tried to handle the switch from #if defined(X) to the old #ifdef X supported by the assembler.
Eventually I opted for very basic header files, included from assembly directly and integrated into a more convoluted header file structure for C. I resorted to this solution only because it would take too much time to write a complete filter. If you do this, keep in mind that although comments are not compatible, you can use #if 0/#endif to bracket away parts from both the assembly and the C sides.
When you mix assembly and C in the same source file you may get surprising results. As I wrote before I had defined an assert macro to execute a Sleep instruction in order to halt the debugger. My assert was something like:
The effect is that this inserts the assembly fragment with the Sleep instruction everywhere you assert something. I was running short on program memory, so I tried several combinations of debugging and optimization options, and I discovered a significant difference in memory usage depending on whether asserts were implemented with the assembly fragment or via a function call.
Apparently the optimizer has a hard time doing its work when an assembly code block is inserted in a C function, no matter what the content of the block is (the Sleep instruction has no side effects that can disturb the C code execution).
I think the assert is one of the rare cases where you want assembly language for reasons other than performance. So there is a sort of contradiction: normally you use an assembly fragment to improve speed, but here it kills the C optimizer.
If you need assembly for performance, put it in a specific .asm source file.
Next time I’ll write about the IDE and debugging. | http://www.maxpagani.org/2011/05/31/pic18f-software-project-survival-guide-3/ | CC-MAIN-2018-09 | refinedweb | 1,006 | 59.33 |
For those of you who didn’t already know, several of our CodeRush plugins are already available from the Visual Studio Gallery.
Plugins added to the Visual Studio Gallery can be installed from within Studio via the Tools\Extensions and Updates menu.
Additionally Studio will let you know if a new version is released. Assuming you agree, it will download the new version and install it for you as well.
So we’ve just added another 5 plugins to those already up there making it even easier to pimp your IDE.
For reference these are the plugins we’ve just added:
To browse a full list of CodeRush plugins on Visual Studio Gallery just visit Tools\Extensions and Updates in Studio and search the online section for CodeRush
Alternatively, just follow this link.
Note: The template referred to by this post does not ship with CodeRush. To install it, see instructions towards the end of the post.
What is a Metric
A metric is a measurement of some quality of your code. In this case it is a per-method measurement. CodeRush displays the metric for each member, on the left of the member in question.
CodeRush ships with 3 such metrics:
You can choose which metric CodeRush displays, by clicking the current value and choosing another metric from the list.
The NewMetric Template
As you'll have guessed based on previous posts in this series, the purpose of the NewMetric template is to provide the quickest way to create your own metric.
Usage
As with other plugin templates, this one is intended to be expanded (NewMetric<space>) in the body of your plugin class.
It will produce code that looks like this:
As with other plugin templates, you’ll have to give your Metric 3 names.
As a quick reminder, these are:
Next up, you’ll have to call the registerXXXX method (just generated and named) from within your plugin’s InitializePlugin method.
Finally you need to implement your plugin’s new metric. This is achieved by adding code to the GetMetricValue handler (also stubbed out by the template).
Typically you’ll examine the source of your project and perform some sort of calculation based on the LanguageElement passed to you in e.LanguageElement. Then you’ll assign the result of this calculation to e.Value.
So if you were looking to replicate the lines of code metric, you might use code similar to:
..and that’s it.
In Summary…
…and you’re done.
Where do I get this template?
As usual this template is available from the CodeRush Plugin Templates repository on GitHub
If you have any suggestions to improve any of the templates in this series, feel free to contact me via email or on twitter.
If you so inclined you could even fork the project, make some changes and send me a pull request.
Note: The templates referred to by this post do not ship with CodeRush. To install them, see instructions towards the end of the post.
This post describes two more templates in my Plugin Templates series. The series itself is introduced in my previous post - So you want to write a CodeRush plugin
Purpose
The NewRefactoring and NewCodeProvider templates are the single quickest way to get started creating Refactoring and CodeProvider (collectively called ContentProvider) plugins.
These types of plugins add items to the Red (Refactoring) and Blue (Code) sections of the CodeRush SmartTag menu.
The templates work very similarly to my previous NewAction template in terms of placement, ie you should expand them in the body of your plugin class and then call the generated register methods from within your plugin's InitializePlugin method.
This is how the NewRefactoring template expands:
…And here is what you get when you expand the NewCodeProvider template
Note: As before, you’ll need to call the appropriate registration method from your InitializePlugin method
You can see they are structurally very similar: they both create Register, CheckAvailability and Apply methods. In each case there is a small amount of customization to be done in order to get your ContentProviders up and running.
You’ll have to give your ContentProvider 3 names.
Next you’ll have to provide the code for:
CheckAvailability
Availability is about determining if it is currently appropriate for your ContentProvider to be present on the CodeRush SmartTag menu.
Your CheckAvailability method is where you do this. In the unlikely event that your Refactoring\CodeProvider is always supposed to be available, simply leave the code as it is.
In most cases, your ContentProvider will only make sense under certain circumstances. If these circumstances are not met, then it makes more sense to simply not show your ContentProvider in the SmartTag menu.
Consider the “Split String” refactoring. This refactoring is designed to allow you to change
…Into…
It’s clear, there is no point in providing this Refactoring, unless the user’s caret is within a string.
Thus the SplitString_CheckAvailability method might look something like this:
See how it only sets ea.Available to true if the ActiveString is not null. ie The caret is on a string literal.
Of course you should feel free to make your determination in any way you see fit.
You could:
If you really want to, you could call out to the web, and check a webcam to see if there’s a blue moon
Just keep in mind that the check you perform here, will be performed quite often as the user moves around the code. This code needs to be as fast as possible.
Execute
As you might imagine, the ContentProvider_Execute method is where you perform the main function of your ContentProvider.
You might:
The sky really is the limit.
Ok Cool, so how do I get these templates?
The NewRefactoring and NewCodeProvider templates are available in the CodeRush Plugin Templates repository on GitHub
You can import them the traditional way
OR you can use the Template Importer and point it at
If you have any suggestions to improve any of the templates in this series, feel free to contact me via email or on twitter.
I was rereading my previous post New CodeRush Plugin Template – NewAction
In it I asked you to ….
However it occurs to me that this is rather lengthy and awkward.
What if we could automate that a little more? What if we could write a plugin which was able to download templates and install them itself? What if….
Aw nuts! Never mind ‘What if’……….Let’s just do it!
CR_TemplateImporter
This is Template Importer.
Note: It’s pretty clear that I’m no UI expert, but that’s another good reason to open source and accept pull requests.
In brief:
It can do 2 amazing things.
More Detail
Import Templates (from the Web)
A exported set of templates (see Exporting and Importing CodeRush Templates) is just an XML file. They’re easy to create, easy to store and easy to publish for others to use. All that’s really needed is an easy way to strip them off the web, and import them into your system.
The top half of the screen does exactly this. You can specify the url of a raw template file, and then click the Import button. (For your convenience, the little “>” button on the left will copy the example into the live box)
The example file is published on github, but there’s no limit to where you can put these files.
Anywhere they can be reached by http is good enough.
I’m planning on uploading a lot more of my templates to various appropriate github locations.
But wait …. It gets better!
Import Packages (from the Web)
Q: Hang on… What is a package?
A: A package is a simple xml file (Yes I like XML files. Sorry.) which contains the urls of one or more exported template files.
For example here is the contents of the example package that is listed within the TemplateImporter by default.
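The example file itself was lost in extraction; based on the description below, a package presumably looks something like this (the root element name and the URLs are guesses on my part — only the template node and its url/language attributes are documented).

```xml
<TemplatePackage>
  <template url="https://example.com/templates/CS_NewAction.xml" language="CSharp" />
  <template url="https://example.com/templates/CS_NewRefactoring.xml" language="CSharp" />
</TemplatePackage>
```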
The xml is fairly simple.
The template node has a url attribute (pointing the way to a template file) and a language attribute (specifying the language of the templates within the file).
Of course you can add as many ‘template’ nodes as you want, which means that you can create single package file that lists all your favourite 3rd party templates and then import them onto any machine you like in one hit. Great for being able to pull your standard templates onto your co-worker’s machine.
Limitations
The Current Template Importer is limited in a few ways.
Guidelines
There are no real safeguards in place yet. Be careful.
So where can I get it?
As usual, the source code and binary for this plugin are available on github
Like many others, this plugin is listed on the CodeRush Plugins Site.
You can either:
OR
In future when I provide one or more templates, you should be able to use this plugin to point directly at the file’s url and suck the templates straight off the web without all that tedious mucking about with Right click … Save As etc
Note: The template referred to by this post does not ship with CodeRush. To install it, see instructions towards the end of the post
This post describes the first new plugin template in a series. The series itself is introduced in my previous post - So you want to write a CodeRush plugin
The NewAction template provides the single quickest way to create a new action for your plugin. Actions are named bits of code which a user can trigger via a shortcut (mouse, keyboard or menu).
Each new action creates an entry which will be placed in the commands dropdown on the shortcuts page.
First you’ll need a plugin project. An existing project is fine, or you can create a new one via:
Once you are in your new project, you’ll see code like this:
As indicated in the screenshot, there you should:
In more detail the steps are:
All of these steps involve coding. None of them take you through designers, and there’s no searching for the right properties or events.
The Action does, of course, have more properties and events to investigate, but this template provides the basic setup which almost every action will require.
Download and Install
This template is available in the CodeRush Plugin Templates repository on Github
To install this template you need (the raw version of) CS_NewAction.xml from the repository.
Update: If you’re using the Template Importer plugin, you can pass it this url to import the NewAction template straight off the web.
The Motivation
I found this question on Stackoverflow
Essentially it breaks down to…
How can I refactor this….
…into this…
… in a single step.
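The before/after snippets were lost in extraction; based on the question's description, the transformation is presumably along these lines (all identifiers are illustrative, not taken from the original post).

```csharp
// Before: arguments are inline expressions.
Process(GetWidth(), GetHeight());

// After: each argument is extracted to a new local above the call.
var width = GetWidth();
var height = GetHeight();
Process(width, height);
```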
The Plugin
This plugin allows you to extract all arguments of a method to new locals which are then placed above the method call.
And the Result
Where can I get this wonderful plugin?
I’ve recently been asked:
How come you’re putting all your plugins on GitHub? Have you abandoned the DXCoreCommunity site?
This post is an attempt to answer this question.
The Problem
The tech and structure underlying the existing DXCore Community Site is old and out of date. It's an SVN repository hosted with Google Code. It's a single repository that holds many plugins. The site is larger than I ever thought it would be. We have over 100 plugins and their associated wiki pages, and it is getting out of hand.
Within the current structure, branching and merging are so much of a pain that I don't think it's been done in a very long time. It's actually an inertial barrier to making any changes at all.
With hindsight, it should probably never have been created as a single repository of plugins. Rather it should have been an index to many repositories, each holding their own plugin project. However what's done is done, and porting 100+ projects out of a single SVN repository and into 100+ individual repositories (along with their various wiki pages and links between them) is a task that seems a little impractical to take on wholesale.
The other major pain point of the SVN based site, is it’s centralized nature. A single central repository means that it has a single central access management system. ie Anyone who needs write access, needs to be given explicit write access. In addition there is no way to give access to only those parts of the project that make the most sense. Write access is site wide, or not at all. As the breadth of a project grows, so does this issue.
First Steps
The first step in solving any problem is to recognise its existence. In this case, I acknowledge: the community site is too big and unwieldy to manage properly, and it has been for some time.
The second step is to stop making it worse. If I can help it, I will not be adding any additional plugins to that site.
So how to move forward?
The Code
So far as the code is concerned, I'll be doing what you have seen recently: creating each new plugin within its own repository. Each repo will contain code, readme and releases (including binaries).
I will be using Git, because I find it quicker, easier and more flexible than any other source control I have used before. The use of Git, along with isolating each plugin in its own repo, will make branching and merging much easier. This in turn will allow easier experimentation, and more innovation.
This means that there will be many more repositories than before. This was always the case anyway, but the community site ironically didn't do much to acknowledge the community outside its own walls. People already create plugins which are not a part of the site. They blog about them independently and even host them in different types of source control.
So in acknowledging that there are plugins beyond the community site, we justify the continuing need (perhaps a bigger need than before) for a site whose purpose is to provide a location for people to learn about the existence of such plugins.
The Community Site
This rise in plugin locations increases the need for somewhere to index them. Whilst I could repurpose the existing Community site, it’s worth noting that the single central access management system is as much of an issue with the wiki side of things as it is with the source code side.
The Community Site, or rather the tech that supports it, is not up to the task of providing a central location to locate useful plugins.
So we need a new community site which needs to be…
Can you see where I’m going with this?
Yup… I decided to depart the shores of Google Code and set sail for the land of GitHub.
Hosting the site using Git has a lot of advantages:
In addition there are particular advantages to hosting with GitHub
The new community site will be a pure index. It will not be responsible for any plugin code. It will not be responsible for any plugin readme file. It will likely have to contain a basic description of each plugin and might list any given plugin on multiple pages according to its categorization. However the plugins will not be hosted on this site, and the plugin authors will retain complete responsibility for them.
Not hosting the plugin code or binaries gives the new community site an advantage over the old one. Every linked plugin is equal in the eyes of the community site.
The site can link directly to:
…without granting any of them special treatment because of their local status, because none of them will be local.
Licensing
The previous community site used the MIT licence. This was felt to be a lowest common denominator. It was understood that plugins hosted on the site could be essentially anything, as long as the original author was recognised. However not everyone wanted to use this licensing model. Now they don't have to. Simply create a repository of your own (GitHub or anywhere else), upload your code and add the license of your choice. Then let us know and we'll add your plugin to our lists in all the appropriate places.
We’re live!
The first version of the site is already up and running at:
It lists all of my most recent plugins and links back to the original community site for completeness. Go on. Check it out. If you’d like me to add your plugin, send me a link to your repo or blog-post and I’ll add you to the list. If you’d like to help improve the look and feel or content of the site, Fork it! and send me a pull request.
You have a simple idea for a CodeRush plugin.
But you can’t be bothered with all of that drag and drop visual designer business.
Visual Designers are Slow
From the moment you reach for the mouse you know you’ve already lost speed. Visual designers aren’t just slow though. They’re very awkward to use. Even if they responded at the speed of light, they’d still be awkward to use.
Assuming that they invoke instantly, you still have to wave that mouse around all over the place in order to get anything done at all.
For a simple control/component with 2 properties and 2 events the procedure is something like this.
That’s 8 mouse operations and 4 keyboard operations. It’s also 8 context changes (Mouse –> Keyboard or Keyboard –> Mouse)
So what’s the alternative?
Enter… CodeRush
Well what is the toolbox ultimately used for? That’s right, it’s an easy way to write code quickly.
Ok well CodeRush already has many ways to write code quickly. One of the most effective of these, is also one of the easiest to use.
Templates
Type a few characters…. Hit space… Oh look there’s your code. Awesome!
So back to the main point of this post: I’ve been putting together a group of templates that can ease the creation of plugins. I’m collecting these in a Github repository which I’m just going to leave over here.
I’ll go through them in some upcoming posts, but the general gist is this.
I’ve summarized these instructions in this handy screenshot.
The Templates
The templates presented emit code designed to replace the drag and drop designer approach typically used for such things. This is my own preferred approach. I do not suggest it is the only approach. However it is one that works for me :)
Upcoming posts will walk you though several templates including (but not limited to) NewAction, NewRefactoring and NewCodeProvider. So stick around and we’ll help kick-start that plugin for you.
CodeRush User Richard O'Neil has created his own set of FakeItEasy templates to speed his development.
These templates give him an edge when creating his fake objects using this popular framework.
In case you’re unaware, FakeItEasy is a Mocking framework for .Net which is hosted on GitHub
As I write this, Richard has support for the following templates:
Setup FakeItEasy
Trigger: ufie<space> Effect: Adds ‘FakeItEasy’ to the namespace references of the current file.
Note:This template does not need to be called explicitly since it is called for each of the following templates already. Additionally, the reference is added in the correct location above the class definition.
Create a Fake Object
Trigger: fief<space> Expands to:
Set up a call to return a value
Trigger: fiecr<space> Expands to:
Creating ignore parameters
Template: fieany?Type?
Examples:
fieanyi<space> Expands to:
fieanys<space> Expands to:
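The individual expansions were lost in extraction, but the FakeItEasy calls these templates target look roughly like this in use (the interface and member names are illustrative; A.Fake, A.CallTo(...).Returns and A<T>.Ignored are the real FakeItEasy APIs involved).

```csharp
var mailer = A.Fake<IMailService>();             // fief: create a fake object

A.CallTo(() => mailer.Send(A<string>.Ignored))   // fieany: ignore a parameter
    .Returns(true);                              // fiecr: set up a return value
```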
Being the awesome community person he is, Richard has published these templates on github so anyone can take advantage of them.
If you’d like to use these templates in your own projects, download the xml file from the repository and follow the Import instructions in this post
Update: If you’re using the Template Importer plugin, you can pass it this url to import the FakeItEasy templates straight off the web.
CodeRush’s Templates are awesome.
Sadly we can’t ship every conceivable variation of awesome out of the box. However we do ship the ability for you to write your own templates.
I’ve blogged about creating templates before, but thought it might be worth reminding you how to Export your templates for sharing and how to Import templates written by others.
Exporting Templates
From the templates options screen (Ctrl+Shift+Alt+O, Editor\Templates) you should arrange your templates such that:
To perform the actual export…
At this point you have a totally portable .xml file which can be taken to another computer and imported.
Importing Templates
From the templates options screen (Ctrl+Shift+Alt+O, Editor\Templates)
If the template file indicates a folder that you already have then you will be offered the following dialog:
The default choice here will wipe any duplicate templates found in your existing system in favour of those found in the templates file.
In the example above, I am importing some templates into a ‘Custom’ folder so those are the only ones at risk.
If you are unsure, feel free to pick the 2nd option and change the top level folder to something new (perhaps ‘Custom2’). You can then examine the plugins imported into that folder and decide which ones you’d like to keep. You can either move them around manually or decide that they’re all fine and reimport over the top of the original location (remembering to delete the ‘Custom2’ folder if this is the case)
Summary
So you now have the tools to let you export some of your own awesome templates for others to use.
Why not spread the wealth and share your templates with co-workers or even with the wider community.
In some upcoming posts I’ll be doing exactly that. Sharing some of my own custom templates with you to help make your coding lives easier.
Update: Another way to import CodeRush templates, is to pull them straight off the web using the Template Importer. | https://community.devexpress.com/blogs/rorybecker/default.aspx?PageIndex=4 | CC-MAIN-2017-26 | refinedweb | 3,741 | 70.63 |
Spark, from training/testing loop to model, working to score in production
Suppose we trained/tested the model, found it good, and saved the trained model to a file system.
All that using Spark and Spark MLlib (Python).
1) How do we start to use this model in production to process actual requests and predict? Can we use the same Spark cluster (I mean, load the model in another Spark app and process online requests with this model)?
2) Should we, in parallel, run the training/testing on more recent data and once in a while "refresh" the model we use in production? Is that an acceptable solution?
3) I'm afraid the online/production performance of Python might be low, so is there a way to speed up execution in production? I mean, could the trained model be transferred to C or otherwise "speed-improved"?
Thanks!
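For question 1, a pipeline saved with Spark ML's `model.save(path)` can be reloaded in a separate Spark application with `PipelineModel.load`; for question 2, periodic retraining with a staleness check is a common pattern. Below is a minimal sketch — the model path, function names, and the 7-day threshold are assumptions for illustration, not part of the original question:

```python
from datetime import datetime, timedelta

def should_retrain(last_trained, now, max_age_days=7):
    """Decide whether the production model is stale (question 2).
    A pure function, so the refresh policy is easy to test."""
    return (now - last_trained) > timedelta(days=max_age_days)

def score_requests(spark, requests_df, model_path="/models/my_model/v1"):
    """Load a previously saved pipeline and score a batch of requests
    (question 1). Works in any Spark app that can read model_path."""
    from pyspark.ml import PipelineModel  # imported lazily so the helper above stays Spark-free
    model = PipelineModel.load(model_path)
    return model.transform(requests_df)
```

The same cluster can technically run both jobs, but scoring and heavy retraining are usually kept in separate applications so a long training run cannot starve the online path.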
- Convert result of aggregation into 3 separate fields of col name, aggregate function and value
I have a dataframe in the form
+-----+--------+-------+
| id  | label  | count |
+-----+--------+-------+
| id1 | label1 | 5     |
| id1 | label1 | 2     |
| id2 | label2 | 3     |
+-----+--------+-------+
and I would like the resulting output to look like
+-----+--------+----------+----------+-------+
| id  | label  | col_name | agg_func | value |
+-----+--------+----------+----------+-------+
| id1 | label1 | count    | avg      | 3.5   |
| id1 | label1 | count    | sum      | 7     |
| id2 | label2 | count    | avg      | 3     |
| id2 | label2 | count    | sum      | 3     |
+-----+--------+----------+----------+-------+
First, I created a list of aggregate functions using the code below. I then apply these functions into the original dataframe to get the aggregation results in separate columns.
val f = org.apache.spark.sql.functions
val aggCols = Seq("col_name")
val aggFuncs = Seq("avg", "sum")
val aggOp = for (func <- aggFuncs) yield {
  aggCols.map(x => f.getClass.getMethod(func, x.getClass).invoke(f, x).asInstanceOf[Column])
}
val aggOpFlat = aggOp.flatten
df.groupBy("id", "label").agg(aggOpFlat.head, aggOpFlat.tail: _*).na.fill(0)
I get to the format
+-----+--------+---------------+---------------+
| id  | label  | avg(col_name) | sum(col_name) |
+-----+--------+---------------+---------------+
| id1 | label1 | 3.5           | 7             |
| id2 | label2 | 3             | 3             |
+-----+--------+---------------+---------------+
but cannot think of the logic to get to what I want.
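One way to think about it: after the wide aggregation, each output row has to be unpivoted into one row per aggregate function (in Spark SQL this is what `stack` in `selectExpr`, or a union of per-function selects, does). The reshaping logic itself is simple; here it is sketched on plain Python dicts — the helper and the sample data are hypothetical, though the column and function names mirror the example above:

```python
def melt_aggregates(row, id_cols, col_name, agg_funcs):
    """Turn one wide row like {'id': .., 'label': .., 'avg(count)': .., 'sum(count)': ..}
    into long rows carrying col_name / agg_func / value fields."""
    out = []
    for func in agg_funcs:
        long_row = {k: row[k] for k in id_cols}      # keep the grouping keys
        long_row.update({
            "col_name": col_name,
            "agg_func": func,
            "value": row["%s(%s)" % (func, col_name)],  # e.g. row["avg(count)"]
        })
        out.append(long_row)
    return out

wide = {"id": "id1", "label": "label1", "avg(count)": 3.5, "sum(count)": 7}
long_rows = melt_aggregates(wide, ["id", "label"], "count", ["avg", "sum"])
```

Applied per row (e.g. via `flatMap` over the aggregated DataFrame's rows), this produces exactly the long layout shown in the desired output.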
- How to select into external database in pyspark
I have been trying to execute this very simple query on Azure Databricks:
query = '(select * into [mySchema].[myTable] from [mySchema].[otherTable] where 1 = 2) createTableQuery'
spark = SparkSession.builder.getOrCreate()
df = spark.read \
    .format("jdbc") \
    .option("dbtable", query) \
    .option("user", dwUser) \
    .option("password", dwPass) \
    .option("url", dwUrl) \
    .load()
However, it looks like spark doesn't support select into or stored procedures. When I try to run that, it gives me the exception below:
com.microsoft.sqlserver.jdbc.SQLServerException: Parse error at line: 1, column: 29: Incorrect syntax near '*'.
I actually managed to successfully run the query by following this url content below:
Executing sql server stored procedures on databricks pyspark
Basically I will have to install the normal python packages:
%sh
curl | apt-key add -
curl > /etc/apt/sources.list.d/mssql-release.list
apt-get update
ACCEPT_EULA=Y apt-get install msodbcsql17
apt-get -y install unixodbc-dev
/databricks/python/bin/pip install pyodbc
curl | apt-key add -
curl > /etc/apt/sources.list.d/mssql-release.list
apt-get update
ACCEPT_EULA=Y apt-get install msodbcsql17
apt-get -y install unixodbc-dev
sudo apt-get install python3-pip -y
pip3 install --upgrade pyodbc
and then I can use python pyodbc:
import pyodbc
conn = pyodbc.connect(
    'DRIVER={ODBC Driver 17 for SQL Server};'
    'SERVER='
    'DATABASE='
    'PWD=')
conn.autocommit = True
conn.execute(query)
conn.close()
That doesn't look very attractive, nor like the Spark way to do things. I want to know if there is a better way to accomplish it.
- How can I create a new DataFrame from a list?
Hello guys, I have this function that gets the row values from a DataFrame, converts them into a list, and then makes a DataFrame from it.
// Gets the row content from the "content" column
val dfList = df.select("content").rdd.map(r => r(0).toString).collect.toList
val dataSet = sparkSession.createDataset(dfList)
// Makes a new DataFrame
sparkSession.read.json(dataSet)
What do I need to do to make a list with the other column values, so I can have another DataFrame with the other columns' values?
val dfList = df.select("content", "collection", "h").rdd.map(r => {
  println("******ROW********")
  println(r(0).toString)
  println(r(1).toString)
  println(r(2).toString)
  // These have the row values from the other
  // columns in the select
}).collect.toList
thanks
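Once `collect` returns, the rest is plain list manipulation — each collected `Row` behaves like a tuple, so you can split the rows into one list per selected column and build a new Dataset/DataFrame from whichever list you need, exactly as was done for `content`. A Spark-free sketch of that splitting step (the data and helper name are made up):

```python
def columns_to_lists(rows, names):
    """Given collected rows (tuples) and their column names,
    return a dict mapping each column name to its list of values."""
    return {name: [row[i] for row in rows] for i, name in enumerate(names)}

rows = [("c1", "coll1", "h1"), ("c2", "coll2", "h2")]
by_col = columns_to_lists(rows, ["content", "collection", "h"])
```

Each per-column list (`by_col["collection"]`, `by_col["h"]`, …) can then be fed to `createDataset` the same way the `content` list was.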
| http://quabr.com/56749982/spark-from-training-testing-loop-to-model-working-to-score-in-production | CC-MAIN-2019-30 | refinedweb | 1,036 | 57.37 |
- Can we find sentences around an entity tagged via NER?
We have a model ready which identifies a custom named entity. The problem is that if the whole doc is given, the model does not work as expected; if only a few sentences are given, it gives amazing results.
I want to select two sentences before and after a tagged entity.
e.g. If a part of the doc has the word Colombo (which is tagged as GPE), I need to select the two sentences before the tag and the two sentences after it. I tried a couple of approaches but the complexity is too high.
Is there a built-in way in spacy with which we can address this problem?
I am using python and spacy.
I have tried parsing the doc by identifying the index of the tag. But that approach is really slow.
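There is no built-in ±N-sentence window in spaCy, but `doc.sents` plus the entity's `ent.sent` makes it cheap: materialize the sentences once, find the index of the entity's sentence, and slice. The windowing itself is just index arithmetic, sketched here spaCy-free on a list of sentence strings (in spaCy, `sents` would be `list(doc.sents)` and `hit` the index of `ent.sent` within it; the helper name is made up):

```python
def sentence_window(sentences, hit, before=2, after=2):
    """Return the sentences from `before` sentences before index `hit`
    to `after` sentences after it, clamped to the document bounds."""
    start = max(0, hit - before)
    end = min(len(sentences), hit + after + 1)
    return sentences[start:end]

sents = ["s0", "s1", "s2", "s3", "s4", "s5"]
ctx = sentence_window(sents, 3)  # the tagged entity sits in sentence 3
```

Because the sentence list is built once per doc and each window is a slice, this stays linear in document length instead of re-parsing around every entity.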
- How to create a pmml from an sklearn model that can be imported into a python file with sklearn-pmml-model?
I want to create a PMML file from a scikit-learn model. The PMML file will be read/imported from other Python files. But the results give me various errors. List of errors: - PMML model ensemble should use majority vote. - Sklearn only supports binary tree models. Now I'm confused about which step produces the error: is it when creating (exporting) the PMML file, or when importing it? Or are there any other library recommendations that fit my problem?
I've tried many libraries such as sklearn2pmml, nyoka & scikit2pmml to create the PMML file, but the result is the same. To import the PMML file, I'm using sklearn-pmml-model.
Create Model:
import pandas
iris_df = pandas.read_csv("/smart_apps/iris2.csv")
from sklearn.tree import DecisionTreeClassifier
from sklearn2pmml.pipeline import PMMLPipeline
pipeline = PMMLPipeline([("classifier", DecisionTreeClassifier())])
pipeline.fit(iris_df[iris_df.columns.difference(["species"])], iris_df["species"])
from sklearn2pmml import sklearn2pmml
sklearn2pmml(pipeline, "model4.pmml", with_repr = True)
Expected result: successfully import the PMML file.
GUI
GUI How to GUI in Net-beans ... ??
Please visit the following link:
Gui plz help
Gui plz help Create a Java application that would allow a person... be in the range from 1 to 10).
In total, 10 questions should be asked. After 10... far. so basically what i did is i used the java palletes to make a application
urgent help needed in JDBC AND JAVA GUI - JDBC
want any one to help me convert from scanner to java GUI for this code...urgent help needed in JDBC AND JAVA GUI my application allows...();
}
}
// thanks for any help rendered
Hi Friend,
Try the following code
Java GUI Program - Java Beginners
Java GUI Program How is the following program supposed to be coded...://
Thanks.
Amardeep... by Day 7 under a thread in your Team Forum called Week Three Program for 10 points
java gui-with jscroll pane
java gui-with jscroll pane Dear friends..
I have a doubt in my gui application.
I developed 1 application. In this application is 1 Jscrollpane... feilds through my program....
some one please help me....
Thanks in advance
java gui database - Java Beginners
the data on command line but I do not know how to display the same using java gui. Could anybody help me.
Thanks.
Shah...java gui database I have eight files. Each file has exactly same
GUI - Java Beginners
links:
Hope that it will be helpful for you.
Thanks
GUI Interface - Java Beginners
://
Thanks...GUI Interface Respected sir,
please send me the codimg of basic... and multiplication.
But use classes
javax swing
java awt
java awt.event
no other
Java GUI IndexOf - Java Beginners
Java GUI IndexOf Hello and thank you for having this great site. Here is my problem.
Write a Java GUI application called Index.java that inputs...=new JLabel("Charcter to Search: ");
text=new JTextField(10);
area=new
Convert this code to GUI - Java Beginners
the following code to GUI:-
import java.awt.*;
import java.applet.*;
import...);
}
} hi friend,
We have convert your code into GUI...);
label1.setBorder(BorderFactory.createEmptyBorder(5, 10, 5, 5));
label1.setBorder
java gui
java gui friends... good day..
i have doubt in java gui.
? i created 1 java gui application. That has two text fields jtext1,jtext2.
case: user entered value in first textfield(jtext1) and pressed the enter key . the cursor
Writing a GUI program - Java Beginners
to write the code for the GUI. Could anyone please help? Hi Friend...Writing a GUI program Hello everyone!
I'm trying to write a program...();
app.setVisible(true);
app.pack();
}
}
Thanks
bank account gui
.";
}
}
For more information, visit the following link:...; Transaction>();
I already done with the GUI i just need the code to make the button...]=new TextField(10);
p.add(l[i]);
p.add(t[i]);
}
p.add(but);
p.add(but1
Java GUI
Gui Interface - Java Beginners
Gui Interface hi I have two seperate interfaces in the same projects .
my question is how can I access this interface using a jbutton
(i.e... I have to do to make this work?
Thanks again JFrame frame2 = new
GUI 2 - Java Beginners
GUI 2 How can I modify this code? Please give me a GUI...;GUI Example");pack();show();}public void actionPerformed(ActionEvent event...();}}This will help you out.. Brilliant! The keyEvent works, but the modified code
Java GUI code
Java GUI code Write a GUI program to compute the amount of a certificate of deposit on maturity. The sample data follows:
Amount deposited... static void main(String[] args)
{
new TotalAmount();
}
}
Thanks
Java GUI - Applet
Java GUI HELLO,
i am working on java chat server, i add JFrame and make GUI structure by draging buttons and labels, now i want to insert... to help in solving the problem :
import java.awt.GridLayout;
import
Chat in Java wih GUI
Chat in Java wih GUI Welcome all >> << how is everybody >< i wanna Chat program in java server & client
thanks
plz help me to create gui using Java netbeans
plz help me to create gui using Java netbeans Hi,
I am unable to fetch a particular data from DB.I am using netbeans for creating GUI. If I want.... I am unable to fetch the particular data. Plz help me
Hi Friend
Java GUI
Java GUI difference between swing and applet
Component gui - Java Beginners
://
Thanks...Component gui Can you give me an example of Dialog in java Graphical user interface? Hi friend,
import javax.swing.*;
public
Java GUI - Java Beginners
Java GUI HOW TO ADD ICONS IN JAVA GUI PROGRAMMES
Java scroll pane-java GUI
Java scroll pane-java GUI Dear friends.. Very Good morning.
I have a doubt in my gui application.
Take ex:- My gui application has 1 Jscrollpane... my program....
Thanks dears in advance
GUI convert to celsius program - Java Beginners
GUI convert to celsius program how to write java GUI program.../java/swing/
Thanks...(5, 10, 5, 5));
fahLabel.setBorder(BorderFactory.createEmptyBorder(5, 10, 5, 5
Selecting elements of 2D array with GUI
Selecting elements of 2D array with GUI Hello!
I am building a Java application with GUI (JFrame form) that is supposed to display all...?
Many thanks in advance for your help!
Rafal
java GUI program - Java Beginners
java GUI program java program that creates the following GUI, when user enter data in the
textfield, the input will be displayed in the textarea...://
Thanks.
Amardeep
Java GUI to build a Student Registration Program
Java GUI to build a Student Registration Program Write a program... undergrad courses). I also made a list of 10 students.
I'm a little hesitant to move on. Help! Even if it is just a little
GUI and how to convert a distance - Java Beginners
GUI and how to convert a distance i need help..
how to create a GUI application that can be is used to convert a distance unit in miles into its...();
}
}
Thanks
Convert the code to GUI
Java GUI Class Example Java GUI Class Example
Convert the code to GUI
GUI Java JSP application GUI Java JSP application
Convert the code to GUI
Java and GUI application Example Java and GUI application Example
Java GUI code- creating a circle
Java GUI code- creating a circle My assignment is to write a program..., can someone please help me?
import javax.swing.*;
import java.awt....;
//GUI components
JLabel lClx, lCly, lCircumrx, lCircumry, lRadius
Reading big file in Java
Reading big file in Java How to read a big text file in Java program?
Hi,
Read the complete tutorial at How to read big file line by line in java?
Thanks
Advanced GUI
Advanced GUI I want some effects like contraction when i close a frame or expansion when i open a frame..
Can someone help me in this..
Thanks
Convert the code to GUI
How to create GUI application in Java How to create GUI application in Java
how to save a gui form in core java
how to save a gui form in core java please help me
i am java beginner
how to save a jframe containing jtable and panels in java
thank you
Convert the code to GUI
Java Code to GUI can any one convert My code to GUI code
Netbeans GUI Ribbon
Netbeans GUI Ribbon how to create ribbon task in java GUI using netbeans program for drawing rectangle and circle
Java gui program for drawing rectangle and circle how to write java gui program for drawing rectangle and circle?
there shoud be circle...", "this is a green rectangle" etc.
help me please.. i have to submit it this thursday
Java Gui - Java Beginners
Java Gui HOW TO ADD LINK LABELS IN JAVA PROGRAMMES
GUI problem - Java Beginners
GUI problem how to write java program that use JTextField to input data and JTextField to display the output when user click Display Button?? Handle the actionPerformed event for JButton and try doing something like
Tips & Tricks
Tips & Tricks
Splash
Screen in Java 6.0
Splash screens are standard part of many GUI applications to let the user know about starting
GUI - Java Beginners
GUI testing GUI testing in software testing HiNow, use the code and get the answer.import javax.swing.*;public class DemoTryCatchDialog...;GUI Example");pack();show();}public void actionPerformed(ActionEvent event
big doubt
;html>
<body>
<%@page language="java" import="java.sql.*"%>
<%@page language="java" import="java.io.*"%>
<..." COLOR=blue><MARQUEE WIDTH=100%
BEHAVIOR=SCROLL LOOP=10>
java gui - Java Beginners
java gui i have to create dynamically databse,table, and number of field..inside that field dynamically data can be entered with type of variable..after entering all the dat in different form field label,i should have a button
how to refresh my GUI page
how to refresh my GUI page how to refresh a GUI in java
Magic Matrix in GUI
Magic Matrix in GUI I want program in java GUI contain magic matrix for numbers
Regarding GUI Applications
Regarding GUI Applications How to create a save and open jmenu item in java desktop application
How To Pass data from one GUI to another in java swing
How To Pass data from one GUI to another in java swing I'm new to java and part of our assignment is to build a GUI and display a result set from... this. Thanks in advance.
import java.awt.FlowLayout;
import javax.swing. javax.swing.*;
import java.awt.event.*;
public class MarkStudent {
double
GUI - Java Beginners
Gui - Java Beginners
Login Form with Java GUI using Netbeans
Login Form with Java GUI using Netbeans Hi there!
I'm a beginner in Java. I've created 2 class files:
1) TestAssign.java
2) NewFrame.java
How can I...(JFrame.EXIT_ON_CLOSE);
setVisible(true);
}
}
Thanks in advance
Reading a big file effeciently
without memory error in Java?
What is the best method to read a big file very efficiently?
Thanks
Hi,
Kindly check the program Java Read File Line by Line - Java Tutorial.
This is one of the good tutorial with source code
Rental Code GUI - Java Beginners
Rental Code GUI dear sir...
i would like to ask some code of java GUI form that ask the user will choose
the menu to input
Disk #:
type:
title:
record company:
price:
director:
no. of copies
GUI Interface .Reply me with in hour..Its Urgent - Java Beginners
GUI Interface .Reply me with in hour..Its Urgent Hi ...
Now i am... have to create GUI Interace ..How should i create the Interface. In Existing... a GUI interface with some 6 to 8 Vote symbols and should integrate with the already
Java Big Problem - Java Beginners
Java Big Problem Input the current meter reading and the previous... on Java visit to :
Thanks
Vineet...) {
return Math.round(value*Math.pow(10, places))/Math.pow(10, places
Convert the code to GUI ??
Convert the code to GUI ?? hi >>
can anyone help me to conver this code to GUI ??
/**
* @(#)RegistorClass.java
*
*.
* @author...
// Input/Output operations.
private final int READ = 10;
private final int
Regarding GUI Applications
GUI Applications How to create a save and open jmenu item in java desktop application.
Here is a simple JMenuItem example in java swing through which you can perform open and save operations on File.
import
Convert the code to GUI
Convert the code to GUI can any one convert My code to GUI code... int READ = 10; private final int WRITE = 11; // Load/Store operations... void dump() { System.out.print( " "); for (int i = 0; i < 10; i
Raju?s GUI-API
possible for traditional GUI Platforms such as Java/Swing or
Windows/VC++.
Read...
Raju?s GUI-API
Raju?s GUI-API Ajax reusable GUI Classes are based on ?patent pending?
inventions
How to convert this Java code into GUI?
How to convert this Java code into GUI? import java.util.Scanner;
public class StudentMarks {
double totalMarks;
String grade;
public void setTotalMarks(double totalMarks) {
this.totalMarks = totalMarks
GUI Cash Register system for saloon
GUI Cash Register system for saloon Gui java cash register
I have to make cash register system for saloon.
it will look like this but I dont know how to do it.
!alt text
Example : if customer come to make a hair cut. I
How To Insert A New Record to MS Access table database in GUI
events. I know I'm missing something, I just couldn't figure out where. Thanks in advance for your help. Below is part of my java code.
private class..., I've been working on the actionPerformed part of my java application Big - Java Beginners
java Big Hi
pls observe the following coed:
import...");
l5=new JLabel("Database Operation's ");
t1=new JTextField(10);
t2=new JTextField(10);
t =new JTextField(10);
t3=new
The Big Competition - Java Beginners
The Big Competition Let the Machine understand you
-----------------------------
Each team is asked to implement an application that can execute... summarizing your work.
any one can help me to win
thank you
Jigloo SWT/Swing GUI Builder
Jigloo SWT/Swing GUI Builder
CloudGarden's Jigloo GUI Builder is a plugin for the
Eclipse Java IDE and WebSphere Studio, which allows you to build and manage... instruction to remove the image which is at the top of the stack? please help
How to read big file line by line in java?
is very useful in reading big file line by line in
Java.
In this tutorial we... package.
These two classed will help us in making java program which reads the big...Learn how to write a program in java for reading big text file line by line
Flex SDK GUI
Flex SDK GUI Hi.......
please give me ans of this question..
What classes do you typically extend in the Flex SDK for creating GUI controls and why?
Thanks
SmartClient AJAX GUI System
SmartClient AJAX GUI System
SmartClient is the cross-platform AJAX GUI system
chosen by top... application stack, from rich, skinnable, extensible GUI
components to declarative
10 Tips for Writing Effective Articles
10 Tips for Writing Effective Articles
... be yourself convinced about what you are writing. Few tips that will help you... of a bonus with their paid services or products.
These few simple tips can help
determine the top 10 highest and 10 lowest in java
that contains the codebooks and namesmembers. I want to make 10 the highest rank anyone who borrows a book and who the 10 smallest borrow books. Please help me master...determine the top 10 highest and 10 lowest in java hello,
I have
Java
Very Big Problem - Java Beginners
Very Big Problem Write a 'for' loop to input the current meter reading and the previous meter reading and calculate the electic bill... to solve the probelm.
Thanks
Convert the code to GUI
GUI example for beginners GUI example for beginners sory... operations. private final int READ = 10; private final int WRITE = 11...() { System.out.print( " "); for (int i = 0; i < 10; i
Need help in completing a complex program; Thanks a lot for your help
Need help in completing a complex program; Thanks a lot for your help ... it?
Thanks a ton for your help... no output. So please help me with this too.
And also, I am using Runtime function
GUI framework
GUI framework What do you mean by GUI framework? | http://www.roseindia.net/tutorialhelp/comment/4144 | CC-MAIN-2014-41 | refinedweb | 2,537 | 65.12 |
Here is my updated code
sub ReadPolicies {
    # Based on the command below, it reads values into an array.
    my ($type) = @_;
    my @Policies = `/opt/OV/bin/opctemplate -l | grep -i $type | awk '{print \$2}'`;
    return (@Policies);
}

# MAIN
my @FailoverPolicies = ReadPolicies("DBSPI");
for (my $i = 0; $i < @FailoverPolicies; $i++) {
    my $cmd = "/opt/OV/bin/opctemplate -e" . " " . "$FailoverPolicies[$i]";
    my $output = `$cmd`;  # Here I expect this command to read each value
                          # from the array and take action; -e to enable.
}
and this works fine, but the problem I face is that I see so many <defunct> processes on the system. I think it is because the first command has not completed when the second one starts, and so on, and at some point this hangs.
Thank You.
In reply to Re^3: execution of command inside for loop fails by kaka_2
in thread execution of command inside for loop fails by kaka_2
| http://www.perlmonks.org/?parent=1056866;node_id=3333 | CC-MAIN-2014-10 | refinedweb | 158 | 51.04 |