Interview for 5+ Years of Experience in .NET
Today I want to present the interview questions that were recently asked at a CMM Level 3 organisation in Hyderabad. I want to give a detailed description of each and every question, with an example.
1) How can you initialize a class only once throughout the application?
Answer: Use a singleton class. By making the constructor private and exposing a single static instance, the class can be initialized only once:
using System;

public class Singleton
{
    private static Singleton instbhushan;

    private Singleton() {}

    public static Singleton Instance
    {
        get
        {
            if (instbhushan == null)
            {
                instbhushan = new Singleton();
            }
            return instbhushan;
        }
    }
}
2) Cache vs. Application state
My answer: A cache is used when the same data is requested repeatedly from the website; instead of going to the database server each time, the data is kept temporarily in one place (the cache) and fetched from there. A large volume of data can be stored in a cache.
An Application variable is shared across all users and all sessions; Application state is meant for small amounts of data, whereas the cache is used to store extensive data.
With the Cache we can also define a time duration (expiration), whereas Application state just holds a limited amount of data (this is my personal view).
3) Can we use static members in a non-static class?
My answer: Yes, a non-static class can contain static members.
4) Can we use non-static members in a static class?
My answer: No. Static classes can't be instantiated, so non-static (instance) members could never be accessed.
5) ref and out keywords
My answer: The ref keyword passes an argument by reference and requires the variable to be initialized before the call, whereas the out keyword serves the same purpose but requires no prior initialization (the called method must assign it).
public class Testbhushan
{
    int refval = 1;
    int refsval;

    public void Exampletest(ref int refval)
    {
        refval = refval + 1;   // a ref argument must be initialized by the caller first
    }

    public void Exampltest(out int refsval)
    {
        refsval = 10;          // an out argument must be assigned before the method returns
    }
}
Question: delegate with example
My answer: A delegate holds a reference to a method, encapsulating the method call away from the caller, and it can also be invoked asynchronously. When the interviewer asked for an example, I said that custom paging with dynamically created link buttons needs a delegate, which we wire up as an event delegate:
lnk.Click += new System.EventHandler(this.lnk_Click);
private void lnk_Click(object sender , System.EventArgs e)
{
}
SQL Server
Question: I have unique IDs in my table, and a name column that contains repeated values. I want the count of each repeated value. How can you do it?
Ans) Two methods: 1) using cursors and 2) using a while loop. With cursors we need a temp table to store the data and check each row against it, so we can get the count of the repeated values this way.
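For completeness: in most SQL dialects the same count comes from a single set-based GROUP BY query, with no cursor or loop bookkeeping at all. A quick sketch using an in-memory SQLite database from Python (the employees table and its rows are invented purely for the illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany(
    "INSERT INTO employees (name) VALUES (?)",
    [("Ravi",), ("Ravi",), ("Priya",), ("Ravi",), ("Priya",)],
)

# One set-based query replaces all the cursor/while-loop bookkeeping.
rows = conn.execute(
    "SELECT name, COUNT(*) FROM employees GROUP BY name ORDER BY name"
).fetchall()
print(rows)  # [('Priya', 2), ('Ravi', 3)]
```

The same `SELECT name, COUNT(*) ... GROUP BY name` statement works essentially unchanged in SQL Server.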
Hello Guys, I would like to ask a question about my script for the rotation of my 2D RectTransform object. Here is my C# Script:
using UnityEngine;
using System.Collections;
using UnityEngine.UI;
public class QTileViz : MonoBehaviour {
public Queuing queuing = new Queuing();
public Image[] qTiles;
public RectTransform[] rTiles;
public Sprite[] tileSprite;
ArrayList qList;
ArrayList rList;
// Use this for initialization
void Start () {
qList = queuing.queue.GetList ();
rList = queuing.queue.GetYRotations ();
}
// Update is called once per frame
public void UpdateViz () {
rList = queuing.queue.GetYRotations ();
print ("List Size:" + rList.Count);
//for (int i = 0; i < rList.Count; i++)
// print ("index " + i + ": " +rList [i]);
for (int i = 0; i < 5; i++) {
qTiles [i].sprite = tileSprite [(int)qList [i]];
print ("index " + i + ": " +rList [i]);
//rTiles [i].Rotate (0, 0, (int)rList [i]);
rTiles [i].Rotate(new Vector3 (0, 0, (int)rList [i]));
}//for
}
}
The values I'm printing at my print statement
print ("index " + i + ": " +rList [i]);
are correct — I'm getting the numbers I expect, which means my `Queue` implementation is correct. Here's the console output:
But when I try to pass it to the RectTransform as the z value, I'm getting wrong results for its rotation:
rTiles [i].Rotate(new Vector3 (0, 0, (int)rList [i]));
And this will be the result:
Note that the last value (the 4th index) from my queue should go to the 5th QueueTile GameObject, and that last queue value is 0 — I don't even know where that 90.00001 came from. It isn't from index 0 of my queue; the z values on the RectTransform[] just end up as seemingly random numbers close to the z values I actually use, which are 0, 90, 180, 270.
I hope you can inspect my code and tell me what's going on — or is it a Unity problem? Sorry for the long question; I couldn't find any other way to explain this, and sorry for my bad English.
TIA, cheers!
Answer by TashaSkyUp · Mar 15 at 09:26 PM
Just taking a stab here. Do the elements in rTiles[] already have a Z rotation? The function .Rotate rotates a transform further; it does not set the rotation to an absolute value.
First of all, hello, I'm new here.
I have been trying to hack together a number tester, which will test a number to see if it's a prime or not. It should write the result to a file, but it doesn't. I've also been trying to figure out the source of the problem, but I just don't get it.
Here it is, in full:
Code:
// The program should test a number, defined by user-input, if it's a prime number or not.
// This through dividing each number below the user-inputted, starting from 2
// to, let the user-input be k, k-1. If all cases return a remainder, the number first inputted
// number is a prime.
// If none of the cases have a remainder the number is not a prime.
#include <iostream>
#include <cmath>
#include <fstream>

using namespace std;

int main ()
{
    int true_false, quit;
    float i, k, result;
    result = fmod(k, i);
    i = 2;
    cin >> k;
    ofstream primefile;
    primefile.open ("Primeer.txt", ios::ate);
    if (!primefile.is_open())
    {
        cout << " The file could not be opened. ";
    }
    else
    {
        while (i = k - 1)
        {
            result = fmod(k,i);
            if (result = 0)
            {
                true_false = true_false + 1;
                break;
                cout << " The following number isn't a prime: " << k;
                primefile << " The following number isn't a prime: " << k;
                primefile.close();
                cout << " Succesfull writing to file: " << primefile;
            }
            if (result > 0)
            {
                true_false = true_false + 0;
                i++;
            }
        }
    }
    if (i = k - 1, true_false = 0)
    {
        cout << " The following number is a prime: " << k;
        primefile << " The following number is a prime: " << k;
        cout << " Succesfull writing to file: " << primefile;
    }
    cin >> quit;
    if (quit == true)
    {
        return 0;
    }
}
Hope I didn't screw the formatting up.
Any help would be appreciated.
Hello,
As a maintainer of cad/opencascade, I am seeing the following build failure:
/cad/opencascade/work/opencascade-7.5.0/src/OSD/OSD_Parallel_TBB.cxx:28:10: fatal error: 'tbb/task_scheduler_init.h' file not found
#include <tbb/task_scheduler_init.h>
^~~~~~~~~~~~~~~~~~~~~~~~~~~
1 error generated.
ninja: build stopped: subcommand failed.
***
Thanks for your contribution,
Best regards,
Ganael.
Just FYI: after the closure of PR 252892 I have updated my machine and I have been able to install onetbb — back to testing this PR!
Created attachment 221985 [details]
Patch to remove the dependency on devel/tbb
Temporary patch to remove the dependency on devel/tbb.
devel/onetbb should be activated when possible.
Hello Thierry,
Thanks for the patch :)
Do you want me to wait for the big switch or can I commit it earlier ?
(In reply to Ganael LAPLANCHE from comment #3)
Yes, let it wait for the big switch. Meanwhile, a better solution might be found…
OK, no pb :)
Cheers,
Ganael.
Patch committed, thanks!
in reply to Re^4: Monte Carlo - Coin Toss
in thread Monte Carlo - Coin Toss
repellent:
Sweet! So of course I had to take another look:
sub pascal_tri_row {
my $r = shift;
return () if $r < 0;
my @row = (1) x ($r + 1);
for my $i (1 .. $r - 1)
{
$row[$_] += $row[$_ - 1]
for reverse 1 .. $i;
}
return @row;
}
sub robo_2 {
my $row = shift;
my @cols = (1);
++$row;
$cols[$_] = $cols[$_-1] * ($row-$_)/$_ for 1 .. $row-1;
return @cols;
}
sub triangle {
my $numTosses = shift;
my @triangle = (0, 1, 0);
for (1 .. $numTosses) {
my @newTriangle=(0);
push @newTriangle, $triangle[$_]+$triangle[$_+1] for 0 .. $#triangle-1;
push @newTriangle, 0;
@triangle = @newTriangle;
}
return @triangle[1..$#triangle-1];
}
use Benchmark qw(cmpthese);
print "robo_1: ", join(" ",triangle(8)), "\n";
print "repel1: ", join(" ",pascal_tri_row(8)), "\n";
print "robo_2: ", join(" ",robo_2(8)), "\n";
cmpthese -1, {
robo_tri => sub { triangle(32) },
repel_tri => sub { pascal_tri_row(32) },
robo_2 => sub { robo_2(32) },
};
First I put in a trace so I could be sure I wasn't generating useless values. Next, I couldn't bear the zeroes you had to add to your function to match my original return value. They were just sentinel values to simplify the calculation, anyway. So I fixed the original to return only the values of interest. Finally, I had to squeeze a little more speed out of it:
$ perl 892898.pl
robo_1: 1 8 28 56 70 56 28 8 1
repel1: 1 8 28 56 70 56 28 8 1
robo_2: 1 8 28 56 70 56 28 8 1
Rate robo_tri repel_tri robo_2
robo_tri 2196/s -- -42% -93%
repel_tri 3775/s 72% -- -88%
robo_2 31946/s 1355% 746% --
You should've read a bit further down in the wikipedia article you linked. It had a much better algorithm for calculating the coefficients of any row. It saved a nested loop. ;^)
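For anyone following along outside Perl, the per-row identity behind robo_2 — each coefficient derived from its left neighbour via C(n,k) = C(n,k-1)·(n-k+1)/k — is language-agnostic; a Python rendering of the same trick:

```python
def pascal_row(n):
    """Return row n of Pascal's triangle using the multiplicative identity."""
    row = [1]
    for k in range(1, n + 1):
        # Each binomial coefficient comes from the previous one; the
        # division is always exact when done left to right.
        row.append(row[-1] * (n - k + 1) // k)
    return row

print(pascal_row(8))  # [1, 8, 28, 56, 70, 56, 28, 8, 1]
```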
...roboticus
When your only tool is a hammer, all problems look like your thumb.
GoLang – Consuming Oracle REST API from an Oracle Cloud Database
Does anyone write code to access data in a database anymore, and by code I mean SQL? The answer to this question is ‘It Depends’, just like everything in IT.
Using REST APIs is a very common way of accessing and processing data in a database: from using an API to retrieve data, to using a slightly different API to insert data, to using other typical REST functions to perform your typical CRUD operations. Using REST APIs allows developers to focus on writing efficient applications in their language of choice, instead of having to swap between that language and SQL. In the latter case, most developers are not expert SQL developers and do not know how to work efficiently with the data. So leave the SQL and procedural coding to those who are good at it, and then expose the data and that code via REST APIs. The end result is efficient SQL and database coding, and efficient application coding — a win-win for everyone.
I’ve written before about creating REST APIs in an Oracle Cloud Database (DBaaS and Autonomous). In these writings I’ve shown how to use the in-database machine learning features and to use REST APIs to create an interface to the Machine Learning models. These models can be used to to score new data, making a machine learning prediction. The data being used for the prediction doesn’t have to exist in the database, instead the database is being used as a machine learning scoring engine, accessed using a REST API.
Check out an article I wrote about this and creating a REST API for an in-database machine learning model, for Oracle Magazine.
In that article I showed how easy it was to use the in-database machine model using Python.
Python has a huge fan and user base, but one of the challenges with Python is performance, as it is an interpreted language. Don't get me wrong — a lot of work has gone into making Python more efficient. But in some scenarios it just isn't fast enough. In those scenarios people switch to quicker-to-execute languages such as C, C++, Java and GoLang.
Here is the GoLang code to call the in-database machine learning model and process the returned data.
package main

import (
    "fmt"
    "io/ioutil"
    "net/http"
    "os"
)

func main() {
    fmt.Println("---------------------------------------------------")
    fmt.Println("Starting Demo - Calling Oracle in-database ML Model")
    fmt.Println("")

    // Define variables for REST API and parameters for the first prediction
    rest_api := "<full REST API>"

    // This wine is Bad
    a_country := "Portugal"
    a_province := "Douro"
    a_variety := "Portuguese Red"
    a_price := "30"

    // call the REST API, adding in the parameters
    response, err := http.Get(rest_api + "/" + a_country + "/" + a_province + "/" + a_variety + "/" + a_price)
    if err != nil {
        // an error has occurred. Exit
        fmt.Printf("The HTTP request failed with error :: %s\n", err)
        os.Exit(1)
    } else {
        // we got data! Now extract it and print to screen
        responseData, _ := ioutil.ReadAll(response.Body)
        fmt.Println(string(responseData))
    }
    response.Body.Close()

    // Let's call it again with a different set of parameters
    // This wine is Good - same details except the price is different
    a_price = "31"

    // call the REST API, adding in the parameters
    response, err = http.Get(rest_api + "/" + a_country + "/" + a_province + "/" + a_variety + "/" + a_price)
    if err != nil {
        fmt.Printf("The HTTP request failed with error :: %s\n", err)
        os.Exit(1)
    } else {
        responseData, _ := ioutil.ReadAll(response.Body)
        fmt.Println(string(responseData))
    }
    defer response.Body.Close()

    // All done!
    fmt.Println("")
    fmt.Println("...Finished Demo ...")
    fmt.Println("---------------------------------------------------")
}
Spire's type classes abstract over very basic operators like + and *. These operations are normally very fast. This means that any extra work that happens on a per-operation basis (like boxing or object allocation) will cause generic code to be slower than its direct equivalent.
Efficient, generic numeric programming is Spire’s raison d’être. We have developed a set of Ops macros to avoid unnecessary object instantiations at compile-time. This post explains how, and illustrates how you can use these macros in your code!
When using type classes in Scala, we rely on implicit conversions to “add” operators to an otherwise generic type.
In this example, A is the generic type, Ordering is the type class, and > is the implicit operator. foo1 is the code that the programmer writes, and foo4 is a translation of that code after implicits are resolved and syntactic sugar is expanded.
import scala.math.Ordering
import Ordering.Implicits._

def foo1[A: Ordering](x: A, y: A): A =
  x > y

def foo2[A](x: A, y: A)(implicit ev: Ordering[A]): A =
  x > y

def foo3[A](x: A, y: A)(implicit ev: Ordering[A]): A =
  infixOrderingOps[A](x)(ev) > y

def foo4[A](x: A, y: A)(implicit ev: Ordering[A]): A =
  new ev.Ops(x) > y
(This is actually slightly wrong. The expansion to foo4 won't happen until runtime, when infixOrderingOps is called. But it helps illustrate the point.)
Notice that we instantiate an ev.Ops instance for every call to >. This is not a big deal in many cases, but for a call that is normally quite fast it will add up when done many (e.g. millions of) times.
It is possible to work around this:
def bar[A](x: A, y: A)(implicit ev: Ordering[A]): A = ev.gt(x, y)
The ev parameter contains the method we actually want (gt), so instead of instantiating ev.Ops this code calls ev.gt directly.
But this approach is ugly. Compare these two methods:
def qux1[A: Field](x: A, y: A): A =
  ((x pow 2) + (y pow 2)).sqrt

def qux2[A](x: A, y: A)(implicit ev: Field[A]): A =
  ev.sqrt(ev.plus(ev.pow(x, 2), ev.pow(y, 2)))
If you have trouble reading qux2, you are not alone.
At this point, it looks like we can either write clean, readable code (qux1), or code defensively to avoid object allocations (qux2). Most programmers will just choose one or the other (probably the former) and go on with their lives.
However, since this issue affects Spire deeply, we spent a bit more time looking at this problem to see what could be done.
Let’s look at another example, to compare how the “nice” and “fast” code snippets look after implicits are resolved:
def niceBefore[A: Ring](x: A, y: A): A =
  (x + y) * z

def niceAfter[A](x: A, y: A)(implicit ev: Ring[A]): A =
  new RingOps(new RingOps(x)(ev).+(y))(ev).*(z)

def fast[A](x: A, y: A)(implicit ev: Ring[A]): A =
  ev.times(ev.plus(x, y), z)
As we can see, niceAfter and fast are actually quite similar. If we wanted to transform niceAfter into fast, we'd just have to:
1. Figure out the appropriate name for symbolic operators. In this example, + becomes plus and * becomes times.
2. Rewrite the object instantiation and method call, calling the method on ev instead and passing x and y as arguments. In this example, new Ops(x)(ev).foo(y) becomes ev.foo(x, y).
In a nutshell, this transformation is what Spire’s Ops macros do.
Your project must use Scala 2.10+ to be able to use macros.
To use Spire's Ops macros, you'll need to depend on the spire-macros package. If you use SBT, you can do this by adding the following line to build.sbt:
libraryDependencies += "org.spire-math" %% "spire-macros" % "0.6.1"
You will also need to enable macros at the declaration site of your ops classes:
import scala.language.experimental.macros
Consider Sized, a type class that abstracts over the notion of having a size. Type class instances for Char, Map, and List are provided in the companion object. Of course, users can also provide their own instances.
Here’s the code:
trait Sized[A] {
  def size(a: A): Int
  def isEmpty(a: A): Boolean = size(a) == 0
  def nonEmpty(a: A): Boolean = !isEmpty(a)
  def sizeCompare(x: A, y: A): Int = size(x) compare size(y)
}

object Sized {
  implicit val charSized = new Sized[Char] {
    def size(a: Char): Int = a.toInt
  }

  implicit def mapSized[K, V] = new Sized[Map[K, V]] {
    def size(a: Map[K, V]): Int = a.size
  }

  implicit def listSized[A] = new Sized[List[A]] {
    def size(a: List[A]): Int = a.length
    override def isEmpty(a: List[A]): Boolean = a.isEmpty
    override def sizeCompare(x: List[A], y: List[A]): Int =
      (x, y) match {
        case (Nil, Nil)         => 0
        case (Nil, _)           => -1
        case (_, Nil)           => 1
        case (_ :: xt, _ :: yt) => sizeCompare(xt, yt)
      }
  }
}
(Notice that Sized[List[A]] overrides some of the "default" implementations to be more efficient, since taking the full length of a list is an O(n) operation.)
We'd like to be able to call these methods directly on a generic type A when we have an implicit instance of Sized[A] available. So let's define a SizedOps class, using Spire's Ops macros:
import spire.macrosk.Ops
import scala.language.experimental.macros

object Implicits {
  implicit class SizedOps[A: Sized](lhs: A) {
    def size(): Int = macro Ops.unop[Int]
    def isEmpty(): Boolean = macro Ops.unop[Boolean]
    def nonEmpty(): Boolean = macro Ops.unop[Boolean]
    def sizeCompare(rhs: A): Int = macro Ops.binop[A, Int]
  }
}
That’s it!
Here’s what it would look like to use this type class:
import Implicits._

def findSmallest[A: Sized](as: Iterable[A]): A =
  as.reduceLeft { (x, y) => if ((x sizeCompare y) < 0) x else y }

def compact[A: Sized](as: Vector[A]): Vector[A] =
  as.filter(_.nonEmpty)

def totalSize[A: Sized](as: Seq[A]): Int =
  as.foldLeft(0)(_ + _.size)
Not bad, eh?
Of course, there’s always some fine-print.
In this case, the implicit class must use the same parameter names as above. The constructor parameter to SizedOps must be called lhs and the method parameter (if any) must be called rhs. Also, unary operators (methods that take no parameters, like size) must have parentheses.
How do the macros handle classes with multiple constructor parameters, or multiple method parameters? They don't. We haven't needed to support these kinds of exotic classes, but it would probably be easy to extend Spire's Ops macros to support other shapes as well.
If you fail to follow these rules, or if your class has the wrong shape, your code will fail to compile. So don’t worry. If your code compiles, it means you got it right!
The previous example illustrates rewriting method calls to avoid allocations, but what about mapping symbolic operators to method names?
Here's an example showing the mapping from * to times:
trait CanMultiply[A] {
  def times(x: A, y: A): A
}

object Implicits {
  implicit class MultiplyOps[A: CanMultiply](lhs: A) {
    def *(rhs: A): A = macro Ops.binop[A, A]
  }
}

object Example {
  import Implicits._
  def gak[A: CanMultiply](a: A, as: List[A]): A =
    as.foldLeft(a)(_ * _)
}
Currently, the Ops macros have a large (but Spire-specific) mapping from symbols to names. However, your project may want to use different names (or different symbols). What then?
For now, you are out of luck. In Spire 0.7.0, we plan to make it possible to use your own mapping. This should make it easier for other libraries that make heavy use of implicit symbolic operators (e.g. Scalaz) to use these macros as well.
You might wonder how the Ops macros interact with specialization. Fortunately, macros are expanded before the specialization phase. This means you don’t need to worry about it! If your type class is specialized, and you invoke the implicit from a specialized (or non-generic) context, the result will be a specialized call.
(Of course, using Scala’s specialization is tricky, and deserves its own blog post. The good news is that type classes are some of the easiest structures to specialize correctly in Scala.)
Evaluating the macros at compile-time also means that if there are problems with the macro, you’ll find out about those at compile-time as well. While we expect that many projects will benefit from the Ops macros, they were designed specifically for Spire so it’s possible that your project will discover problems, or need new features.
If you do end up using these macros, let us know how they work for you. If you have problems, please open an issue, and if you have bug fixes (or new features) feel free to open a pull request!
We are used to thinking about abstractions having a cost. So we often end up doing mental accounting: “Is it worth making this generic? Can I afford this syntactic sugar? What will the runtime impact of this code be?” These condition us to expect that code can either be beautiful or fast, but not both.
By removing the cost of implicit object instantiation, Spire’s Ops macros raise the abstraction ceiling. They allow us to make free use of type classes without compromising performance. Our goal is to close the gap between direct and generic performance, and to encourage the widest possible use of generic types and type classes in Scala.
Unless otherwise noted, all content is licensed under a Creative Commons Attribution 3.0 Unported License.
This attribute was introduced in November. Somehow it gets injected into my generated .cs files for my XAML views, but the compiler has no clue where to find the attribute. I suppose I have a versioning problem. Is it possible to get the versions where this was introduced, in the tools and in the libraries, please?
What I do is that I simply remove the line from the generated .cs file for the view
Answers
Could it be due to this line in my AssemblyInfo.cs? I didn't have that before...
[assembly: XamlCompilation(XamlCompilationOptions.Compile)]
Getting the same issue. Did you resolve it?
The same for me. I'm working with Xamarin.Forms. I've just updated everything to the latest versions (Android SDK, everything offered by Xamarin Studio updates, all packages in the solution) and now I can't build the project.
Ok, here is what I did to fix it.
I think the key is to remove the .bin and .obj folders and rebuild the solution, because the generated .g.cs files are located somewhere in those folders, and they keep referencing something inappropriate.
After removal they will be regenerated without the error.
@Cuckooshka thank you it works
Cleaning of bin and obj did not work.
Error CS0234: The type or namespace name `XamlFilePathAttribute` does not exist in the namespace `Xamarin.Forms.Xaml`. Are you missing an assembly reference?
For Xamarin Studio we can use this fix. But I'm working on a Jenkins automated build system, and every time I get this error during the release build.
Any other solution for this?
Update Xamarin.Forms to version 2.3.4.192-pre2 from nuget. It solves the issue
With Visual Studio you need to click Clean Solution and then Build Solution.
With Visual Studio it is necessary to clean the project and then build from the same solution; I did not need to go back a version.
Regards
What I do is that I simply remove the line from the generated .cs file for the view
Hi SylvainGravel,
Removing the line worked for me.
Thanks,
Akash
Remove all .bin and .obj folders in Portable project and Try.
It works for me.
Thanks
Removing the bins and objs, then clean and build, does not work for me. Removing the line from the xaml.g.cs works, but I have 148 of these errors, and when I clean and rebuild they get generated again. Impractical. Has anyone got a permanent solution?
Update your Forms version, it should be fixed by now
The type or namespace name `XamlFilePathAttribute` does not exist in the namespace `Xamarin.Forms.Xaml`. Are you missing an assembly reference?
Updating all projects to the same version of Xamarin.Forms resolves this issue.
Thanks, It worked.
Clean .obj and .bin folder. Restart VS in Admin mode and Build.
For me, this worked (VS 2017)
Not sure whether all those steps were required. Fact is, it now works. Note: the problem usually occurs when I load sample projects from GitHub and try to compile them for the first time. Maybe a version clash if my Xamarin version differs from the original author's?
This article is written by Steffen Nissen and was originally published in the March 2005 issue of the Software 2.0 magazine. You can find more articles at the SDJ website.
For years, the Hollywood science fiction films such as I, Robot have portrayed artificial intelligence (AI) as a harbinger of Armageddon. Nothing, however, could be farther from the truth. While Hollywood regales us with horror stories of our imminent demise, people with no interest in the extinction of our species have been harnessing AI to make our lives easier, more productive, longer, and generally better.
The robots in the I, Robot film have an artificial brain based on a network of artificial neurons; this artificial neural network (ANN) is built to model the human brain's own neural network. The Fast Artificial Neural Network (FANN) library is an ANN library, which can be used from C, C++, PHP, Python, Delphi, and Mathematica, and although it cannot create Hollywood magic, it is still a powerful tool for software developers. ANNs can be used in areas as diverse as creating more appealing game-play in computer games, identifying objects in images, and helping the stock brokers predict trends of the ever-changing stock market.
Artificial Intelligence.
ANNs apply the principle of function approximation by example, meaning that they learn a function by looking at examples of this function. One of the simplest examples is an ANN learning the XOR function, but it could just as easily be learning to determine the language of a text, or whether there is a tumour visible in an X-ray image.
If an ANN is to be able to learn a problem, it must be defined as a function with a set of input and output variables supported by examples of how this function should work. A problem like the XOR function is already defined as a function with two binary input variables and a binary output variable, and with the examples which are defined by the results of four different input patterns. However, there are more complicated problems which can be more difficult to define as functions. The input variables to the problem of finding a tumour in an X-ray image could be the pixel values of the image, but they could also be some values extracted from the image. The output could then either be a binary value or a floating-point value representing the probability of a tumour in the image. In ANNs, this floating-point value would normally be between 0 and 1, inclusive.
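To make "a function with a set of input and output variables supported by examples" concrete: the XOR problem, written in the training-file format that FANN reads, looks like this (the header line gives the number of training pairs, the number of inputs, and the number of outputs; input and output lines then alternate):

```
4 2 1
0 0
0
0 1
1
1 0
1
1 1
0
```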
Figure 1. An ANN with four input neurons, a hidden layer, and four output neurons.
A function approximator like an ANN can be viewed as a black box, and when it comes to FANN, this is more or less all you will need to know. However, basic knowledge of how the human brain operates is needed to understand how ANNs work.
The human brain is a highly complicated system which is capable of solving very complex problems. The brain consists of many different elements, but one of its most important building blocks is the neuron, of which it contains approximately 1011. These neurons are connected by around 1015 connections, creating a huge neural network. Neurons send impulses to each other through the connections and these impulses make the brain work. The neural network also receives impulses from the five senses, and sends out impulses to muscles to achieve motion or speech.
The individual neuron can be seen as an input-output machine which waits for impulses from the surrounding neurons and, when it has received enough impulses, it sends out an impulse to other neurons.
Artificial neurons are similar to their biological counterparts. They have input connections which are summed together to determine the strength of their output, which is the result of the sum being fed into an activation function. Though many activation functions exist, the most common is the sigmoid activation function, which outputs a number between 0 (for low input values) and 1 (for high input values). The resultant of this function is then passed as the input to other neurons through more connections, each of which are weighted. These weights determine the behaviour of the network.
In the human brain, the neurons are connected in a seemingly random order and send impulses asynchronously. If we wanted to model a brain, this might be the way to organise an ANN, but since we primarily want to create a function approximator, ANNs are usually not organised like this.
When we create ANNs, the neurons are usually ordered in layers with connections going between the layers. The first layer contains the input neurons and the last layer contains the output neurons. These input and output neurons represent the input and output variables of the function that we want to approximate. Between the input and the output layer, a number of hidden layers exist and the connections (and weights) to and from these hidden layers determine how well the ANN performs. When an ANN is learning to approximate a function, it is shown examples of how the function works, and the internal weights in the ANN are slowly adjusted so as to produce the same output as in the examples. The hope is that when the ANN is shown a new set of input variables, it will give a correct output. Therefore, if an ANN is expected to learn to spot a tumour in an X-ray image, it will be shown many X-ray images containing tumours, and many X-ray images containing healthy tissues. After a period of training with these images, the weights in the ANN should hopefully contain information which will allow it to positively identify tumours in X-ray images that it has not seen during the training.
The Internet has made global communication a part of many people's lives, but it has also given rise to the problem that everyone does not speak the same language. Translation tools can help bridge this gap, but in order for such tools to work, they need to know in what language a passage of text is written. One way to determine this is by examining the frequency of letters occurring in a text. While this seems like a very naïve approach to language detection, it has proven to be very effective. For many European languages, it is enough to look at the frequencies of the letters A to Z, even though some languages also use other letters as well. Easily enough, the FANN library can be used to make a small program that determines the language of a text file. The ANN used should have an input neuron for each of the 26 letters, and an output neuron for each of the languages. But first, a small program must be made for measuring the frequency of the letters in a text file.
Listing 1 will generate letter frequencies for a file and output them in a format that can be used to generate a training file for the FANN library. Training files for the FANN library must consist of a line containing the input values, followed by a line containing the output values. If we wish to distinguish between three different languages (English, French, and Polish), we could choose to represent this by allocating one output variable with a value of 0 for English, 0.5 for French, and 1 for Polish. Neural networks are, however, known to perform better if an output variable is allocated for each language, set to 1 for the correct language and 0 otherwise.
Listing 1. Program that calculates the frequencies of the letters A-Z in a text file.
#include <vector>
#include <fstream>
#include <iostream>
#include <ctype.h>
void error(const char* p, const char* p2 = "")
{
std::cerr << p << ' ' << p2 << std::endl;
std::exit(1);
}
void generate_frequencies(const char *filename, float *frequencies)
{
std::ifstream infile(filename);
if(!infile) error("Cannot open input file", filename);
std::vector<unsigned int> letter_count(26, 0);
unsigned int num_characters = 0;
char c;
while(infile.get(c)){
c = tolower(c);
if(c >= 'a' && c <= 'z'){
letter_count[c - 'a']++;
num_characters++;
}
}
if(!infile.eof()) error("Something strange happened");
for(unsigned int i = 0; i != 26; i++){
frequencies[i] = letter_count[i]/(double)num_characters;
}
}
int main(int argc, char* argv[])
{
if(argc != 2) error("Remember to specify an input file");
float frequencies[26];
generate_frequencies(argv[1], frequencies);
for(unsigned int i = 0; i != 26; i++){
std::cout << frequencies[i] << ' ';
}
std::cout << std::endl;
return 0;
}
With this small program at hand, a training file containing letter frequencies can be generated for texts written in the different languages. The ANN will, of course, be better at distinguishing the languages if frequencies for many different texts are available in the training file, but for this small example, 3-4 texts in each language should be enough. Listing 2 shows a pre-generated training file using four text files for each of the three languages, and Figure 2 shows a graphical representation of the frequencies in the file. A thorough inspection of this file shows clear trends: English has more H's than the other two languages, French has almost no K's, and Polish has more W's and Z's than the other languages. The training file only uses letters in the A to Z range, but since a language like Polish uses letters like Ł, Ą, and Ę which are not used in the other two languages, a more precise ANN could be made by adding input neurons for these letters as well. When only comparing three languages, there is, however, no need for these added letters since the remaining letters contain enough information to classify the languages correctly, but if the ANN were to classify hundreds of different languages, more letters would be required.
Listing 2. The first part of the training file with character frequencies for English, French, and Polish. The first line is a header stating that there are 12 training patterns, each consisting of 26 inputs and 3 outputs.
12 26 3
0.103 0.016 0.054 0.060 0.113 0.010 0.010 0.048 0.056
0.003 0.010 0.035 0.014 0.065 0.075 0.013 0.000 0.051
0.083 0.111 0.030 0.008 0.019 0.000 0.016 0.000
1 0 0
0.076 0.010 0.022 0.039 0.151 0.013 0.009 0.009 0.081
0.001 0.000 0.058 0.024 0.074 0.061 0.030 0.011 0.069
0.100 0.074 0.059 0.015 0.000 0.009 0.003 0.003
0 1 0
0.088 0.016 0.030 0.034 0.089 0.004 0.011 0.023 0.071
0.032 0.030 0.025 0.047 0.058 0.093 0.040 0.000 0.062
0.044 0.035 0.039 0.002 0.044 0.000 0.037 0.046
0 0 1
0.078 0.013 0.043 0.043 0.113 0.024 0.023 0.041 0.068
0.000 0.005 0.045 0.024 0.069 0.095 0.020 0.001 0.061
0.080 0.090 0.029 0.015 0.014 0.000 0.008 0.000
1 0 0
0.061 0.005 0.028 0.040 0.161 0.019 0.010 0.010 0.066
0.016 0.000 0.035 0.028 0.092 0.061 0.031 0.019 0.059
0.101 0.064 0.076 0.016 0.000 0.002 0.002 0.000
0 1 0
0.092 0.016 0.038 0.025 0.083 0.000 0.015 0.009 0.087
0.030 0.040 0.032 0.033 0.063 0.085 0.033 0.000 0.049
0.053 0.033 0.025 0.000 0.053 0.000 0.038 0.067
0 0 1
...
Figure 2. A bar chart of the average frequencies in English, French, and Polish.
With a training file like this, it is very easy to create a program using FANN which can be used to train an ANN to distinguish between the three languages. Listing 3 shows just how simply this can be done with FANN. This program uses four FANN functions: fann_create, fann_train_on_file, fann_save, and fann_destroy. The function struct fann* fann_create(float connection_rate, float learning_rate, unsigned int num_layers, ...) is used to create an ANN, where the connection_rate parameter can be used to create an ANN that is not fully connected, although fully connected ANNs are normally preferred, and the learning_rate is used to specify how aggressive the learning algorithm should be (only relevant for some learning algorithms). The last parameters for the function are used to define the layout of the layers in the ANN. In this case, an ANN with three layers (one input, one hidden, and one output) has been chosen. The input layer has 26 neurons (one for each letter), the output layer has three neurons (one for each language), and the hidden layer has 13 neurons. The number of layers and the number of neurons in the hidden layer have been selected experimentally, as there is really no easy way of determining these values. It helps, however, to remember that the ANN learns by adjusting the weights, so if an ANN contains more neurons and thereby also more weights, it can learn more complicated problems. Having too many weights can also be a problem, since learning can be more difficult and there is also a chance that the ANN will learn specific features of the input variables instead of general patterns which can be extrapolated to other data sets. In order for an ANN to accurately classify data not in the training set, this ability to generalise is crucial; without it, the ANN will be unable to distinguish frequencies that it has not been trained with.
Listing 3. A program that trains an ANN to learn to distinguish between languages.
#include "fann.h"
int main()
{
struct fann *ann = fann_create(1, 0.7, 3, 26, 13, 3);
fann_train_on_file(ann, "frequencies.data", 200, 10, 0.0001);
fann_save(ann, "language_classify.net");
fann_destroy(ann);
return 0;
}
The void fann_train_on_file(struct fann *ann, char *filename, unsigned int max_epochs, unsigned int epochs_between_reports, float desired_error) function trains the ANN. The training is done by continually adjusting the weights so that the output of the ANN matches the output in the training file. One cycle, where the weights are adjusted to match the output in the training file, is called an epoch. In this example, the maximum number of epochs has been set to 200, and a status report is printed every 10 epochs. When measuring how closely an ANN matches the desired output, the mean square error is usually used. The mean square error is the mean value of the squared difference between the actual and the desired output of the ANN, for individual training patterns. A small mean square error means a close match to the desired output.
When the program in Listing 3 is run, the ANN will be trained and some status information (see Listing 4) will be printed to make it easier to monitor progress during training. After training, the ANN could be used directly to determine which language a text is written in, but it is usually desirable to keep training and execution in two different programs, so that the more time-consuming training need be done only once. For this reason, Listing 3 simply saves the ANN to a file that can be loaded by another program.
Listing 4. Output from FANN during training.
Max epochs 200. Desired error: 0.0001000000
Epochs 1. Current error: 0.7464869022
Epochs 10. Current error: 0.7226278782
Epochs 20. Current error: 0.6682052612
Epochs 30. Current error: 0.6573708057
Epochs 40. Current error: 0.5314316154
Epochs 50. Current error: 0.0589125119
Epochs 57. Current error: 0.0000702030
The small program in Listing 5 loads the saved ANN and uses it to classify a text as English, French, or Polish. When tested with texts in the three languages found on the Internet, it can properly classify texts as short as only a few sentences. Although this method for distinguishing between languages is not bullet-proof, I was not able to find a single text that could be classified incorrectly.
Listing 5. A program classifying a text as written in one of the three languages (The program uses some functions defined in Listing 1).
int main(int argc, char* argv[])
{
if(argc != 2) error("Remember to specify an input file");
struct fann *ann = fann_create_from_file("language_classify.net");
float frequencies[26];
generate_frequencies(argv[1], frequencies);
float *output = fann_run(ann, frequencies);
std::cout << "English: " << output[0] << std::endl
<< "French : " << output[1] << std::endl
<< "Polish : " << output[2] << std::endl;
return 0;
}
The language classification example shows just how easily the FANN library can be applied to solve simple, everyday computer science problems which would be much more difficult to solve using other methods. Unfortunately, not all problems can be solved this easily, and when working with ANNs, one often finds oneself in a situation in which it is very difficult to train the ANN to give the correct results. Sometimes, this is because the problem simply cannot be solved by ANNs, but often, the training can be helped by tweaking the FANN library settings.
The most important factor when training an ANN is the size of the ANN. This can only be set experimentally, but knowledge of the problem will often help giving good guesses. With a reasonably sized ANN, the training can be done in many different ways. The FANN library supports several different training algorithms, and the default algorithm (FANN_TRAIN_RPROP) might not always be the best-suited for a specific problem. If this is the case, the fann_set_training_algorithm function can be used to change the training algorithm.
In version 1.2.0 of the FANN library, there are four different training algorithms available, all of which use some sort of back propagation. Back-propagation algorithms change the weights by propagating the error backwards from the output layer to the input layer while adjusting the weights. The back-propagated error value could either be an error calculated for a single training pattern (incremental), or it could be a sum of errors from the entire training file (batch). FANN_TRAIN_INCREMENTAL implements an incremental training algorithm which alters the weights after each training pattern. The advantage of such a training algorithm is that the weights are being altered many times during each epoch, and since each training pattern alters the weights in slightly different directions, the training will not easily get stuck in local minima – states in which all small changes in the weights only make the mean square error worse, even though the optimal solution may have not yet been found.
FANN_TRAIN_BATCH, FANN_TRAIN_RPROP, and FANN_TRAIN_QUICKPROP are all examples of batch-training algorithms which alter the weight after calculating the errors for an entire training set. The advantage of these algorithms is that they can make use of global optimisation information which is not available to incremental training algorithms. This can, however, mean that some of the finer points of the individual training patterns are being missed. There is no clear answer to the question which training algorithm is the best. One of the advanced batch-training algorithms like rprop or quickprop training is usually the best solution. Sometimes, however, incremental training is more optimal – especially if many training patterns are available. In the language training example, the most optimal training algorithm is the default rprop one, which reached the desired mean square error value after just 57 epochs. The incremental training algorithm needed 8108 epochs to reach the same result, while the batch training algorithm needed 91985 epochs. The quickprop training algorithm had more problems, and at first, it failed altogether at reaching the desired error value, but after tweaking the decay of the quickprop algorithm, it reached the desired error after 662 epochs. The decay of the quickprop algorithm is a parameter which is used to control how aggressive the quickprop training algorithm is, and it can be altered by the fann_set_quickprop_decay function. Other fann_set_... functions can also be used to set additional parameters for the individual training algorithms, although some of these parameters can be a bit difficult to tweak without knowledge of how the individual algorithms work.
One parameter which is independent of the training algorithm can, however, be tweaked rather easily: the steepness of the activation function. The activation function is the function that determines when the output should be close to 0 and when it should be close to 1, and the steepness of this function determines how soft or hard the transition from 0 to 1 should be. If the steepness is set to a high value, the training algorithm will converge faster to the extreme values of 0 and 1, which will make training faster for, e.g., the language classification problem. However, if the steepness is set to a low value, it is easier to train an ANN that requires fractional output, such as an ANN that should be trained to find the direction of a line in an image. For setting the steepness of the activation function, FANN provides two functions: fann_set_activation_steepness_hidden and fann_set_activation_steepness_output. There are two functions because it is often desirable to have a different steepness for the hidden layers and for the output layer.
Figure 3. A graph showing the sigmoid activation function for the steepness of 0.25, 0.50, and 1.00.
The language identification problem belongs to a special kind of function approximation problems known as classification problems. Classification problems have one output neuron per classification, and in each training pattern, precisely one of these outputs must be 1. A more general function approximation problem is where the outputs are fractional values. This could, e.g., be approximating the distance to an object viewed by a camera, or even the energy consumption of a house. These problems could of course be combined with classification problems, so there could be a classification problem of identifying the kind of object in an image and a problem of approximating the distance to the object. Often, this can be done by a single ANN, but sometimes it might be a good idea to keep the two problems separate, and e.g., have an ANN which classifies the object, and an ANN for each of the different objects which approximates the distance to the object.
Another kind of approximation problem is the time-series problem: approximating functions which evolve over time. A well-known time-series problem is predicting how many sunspots there will be in a year by looking at historical data. Normal functions have an x-value as an input and a y-value as an output, and the sunspot problem could also be defined like this, with the year as the x-value and the number of sunspots as the y-value. This has, however, proved not to be the best way of solving such problems. Time-series problems can instead be approximated by using a period of time as input and the next time step as output. If the period is set to 10 years, the ANN could be trained with all the 10-year periods where historical data exists, and it could then approximate the number of sunspots in 2005 by using the number of sunspots in 1995 – 2004 as inputs. This approach means that each set of historical data is used in several training patterns; e.g., the number of sunspots for 1980 is used in the training patterns with 1981 – 1990 as outputs. This approach also means that the number of sunspots cannot be directly approximated for 2010 without first approximating 2005 – 2009, which in turn means that half of the input for calculating 2010 will be approximated data, and that the approximation for 2010 will not be as precise as the approximation for 2005. For this reason, time-series prediction is only well suited for predicting things in the near future.
Time-series prediction can also be used to introduce memory in controllers for robots etc. This could, e.g., be done by giving the direction and speed from the last two time steps as input to the third time step, in addition to other inputs from sensors or cameras. The major problem of this approach is, however, that training data can be very difficult to produce, since each training pattern must also include history.
Lots of tricks can be used to make FANN train and execute faster and with greater precision. A simple trick which can be used to make training faster and more precise is to use input and output values in the range -1 to 1, as opposed to 0 to 1. This can be done by changing the values in the training file and using fann_set_activation_function_hidden and fann_set_activation_function_output to change the activation function to FANN_SIGMOID_SYMMETRIC, which has outputs in the range of -1 to 1 instead of 0 to 1. This trick works because a 0 value in an ANN has the unfortunate property that, no matter what value the weight has, the contribution will still be 0. There are, of course, countermeasures in FANN to prevent this from becoming a big problem; nonetheless, this trick has proved to reduce training time. The fann_set_activation_function_output function can also be used to change the activation function to the FANN_LINEAR activation function, which is unbounded and can therefore be used to create ANNs with arbitrary outputs.
When training an ANN, it is often difficult to find out how many epochs should be used for training. If too few epochs are used during training, the ANN will not be able to classify the training data. If, however, too many iterations are used, the ANN will be too specialised in the exact values of the training data, and the ANN will not be good at classifying data it has not seen during training. For this reason, it is often a good idea to have two sets of training data, one applied during the actual training and one applied to verify the quality of the ANN by testing it on data which have not been seen during the training. The fann_test_data function can be used for this purpose, along with other functions which can be used to handle and manipulate training data.
Transforming a problem into a function which can easily be learned by an ANN can be a difficult task, but some general guidelines can be followed.
While training the ANN is often the big time consumer, execution can often be more time-critical. One method, effective only on embedded systems without a floating point processor, is to let the FANN library execute using integers only. The FANN library has a few auxiliary functions allowing the library to be executed using only integers, and on systems which do not have a floating point processor, this can give a performance enhancement of more than 5000%.
When I first released the FANN library version 1.0 in November 2003, I did not really know what to expect, but I thought that everybody should have the option to use this new library that I had created. Much to my surprise, people actually started downloading and using the library. As months went by, more and more users started using FANN, and the library evolved from being a Linux-only library to supporting most major compilers and operating systems (including MSVC++ and Borland C++). The functionality of the library was also considerably expanded, and many of the users started contributing to the library. Soon the library had bindings for PHP, Python, Delphi, and Mathematica, and the library also became accepted in the Debian Linux distribution. My work with FANN and the users of the library takes up some of my spare time, but it is time that I gladly spend. FANN gives me an opportunity to give something back to the open source community, and it gives me a chance to help people, while doing stuff I enjoy. I cannot say that developing Open Source software is something that all software developers should do, but I will say that it has given me a great deal of satisfaction, so if you think that it might be something for you, then find an Open Source project that you would like to contribute to, and start contributing. Or even better, start your own Open Source project.
RL-ARM User's Guide (MDK v4)
#include <stdio.h>
int ferror (
FILE* stream); /* file stream to check */
The function ferror tests a data stream for read or write
errors. The parameter stream is a pointer defining
the data stream. If an error has occurred, the error indicator
remains set until the file is closed or rewound.
The function is included in the library RL-FlashFS. The prototype
is defined in the file stdio.h.
See also: fclose, feof, rewind
#include <rtl.h>
#include <stdio.h>
void tst_ferror (void) {
  FILE *fin;

  if ((fin = fopen ("Test_ferror", "r")) != NULL) {
    // Read all characters from the file
    while (!feof (fin)) {
      printf ("%c", fgetc (fin));
    }
    if (ferror (fin)) {
      printf ("Errors occurred while reading the file!\n");
    }
    fclose (fin);
  }
}
Art wrote: > I have the following problem: > > ipdb> p type(self) > <class 'component.BiasComponent'> > > ipdb> isinstance(self, component.BiasComponent) > False > > I thought that isinstance(obj, type(obj)) == True. Yes, but that is not what you entered ;-). The name 'component.BiasComponent' is not bound to type(self), but to another object, probably with the same .__name__ attribute. > The specific problem is when I try to call the super of a class and it > only occurs after 'reload'ing the file in the interpreter. What am I > messing up by reloading? The relationship between namespace names and definition names. My guess is that you have two class objects with the same definition name. This happens when you reload the code for a class and have instances that keep the original class object alive. This sort of problem is why reload was *removed* from Py3. It did not work the way people wanted it to and expected it to. I recommend not to use it. > It doesn't occur if I using for the first > time in a fresh interpreter session. Starting fresh is the right thing to do. IDLE has a 'Restart Shell' command for this reason. It automatically restarts when you run a file. You have already wasted more time with reload than most people would ever save in a lifetime of using reload, even if it worked as expected. Terry Jan Reedy | https://mail.python.org/pipermail/python-list/2009-June/541557.html | CC-MAIN-2017-04 | refinedweb | 232 | 77.13 |
Hi guys, I want to split the following strings, which are parsed from a text file, using (perhaps) a regular expression in Python.
Inside the text file (filename.txt):
iPhone.Case.1.left=1099.0.2
new.phone.newwork=bla.jpg
I want to split these lines into lists like this:
['iPhone','Case','1','left','1099.0.2']
['new','phone','newwork','bla.jpg']
Here is what I have tried:
import re
pattern = '(?<!\d)[\.=]|[\.=](?!\d)'
f = open('filename.txt','rb')
for line in f:
str_values = re.split(pattern, line.rstrip())
print str_values
This prints:
['iPhone', 'Case', '1', 'left', '1099.0.2']
['new', 'phone', 'newwork', 'bla', 'jpg']
But I want the second line to come out as:
['new','phone','newwork','bla.jpg']
If you have regular enough input data that you can always split first at the = character, and then split the first half at every . character, I would skip regex entirely, as it is complicated and not very pretty to read.
Here's an example of doing just that:
s = 'new.phone.newwork=bla.jpg'
l = str.split(s.split('=')[0], '.') + s.split('=')[1:]
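As a sketch, the same non-regex idea can be wrapped in a small function and applied to both lines from the question (using `str.partition`, which splits at most once):

```python
def split_line(line):
    # Split once at '=', then break only the left half at every '.'.
    # The right half (the value) is kept intact, dots and all.
    left, _, right = line.partition('=')
    return left.split('.') + [right]

lines = ["iPhone.Case.1.left=1099.0.2", "new.phone.newwork=bla.jpg"]
for line in lines:
    print(split_line(line))
```

This prints `['iPhone', 'Case', '1', 'left', '1099.0.2']` and `['new', 'phone', 'newwork', 'bla.jpg']`, which matches the desired output without any lookbehind/lookahead pattern.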
Expression prefix 'Resources' was not recognized by designer
Discussion in 'ASP .Net' started by martijnkaag, Dec 17, 2007.
Created on 2017-06-02 08:58 by vstinner, last changed 2017-06-10 08:59 by vstinner. This issue is now closed.
0:02:58 [ 28/405/2] test_atexit failed
......
beginning 6 repetitions
123456
test_atexit leaked [12, 12, 12] references, sum=36
test_atexit leaked [4, 4, 4] memory blocks, sum=12
Code of the test:
def test_callbacks_leak(self):
# This test shows a leak in refleak mode if atexit doesn't
# take care to free callbacks in its per-subinterpreter module
# state.
n = atexit._ncallbacks()
code = r"""if 1:
import atexit
def f():
pass
atexit.register(f)
del atexit
"""
ret = support.run_in_subinterp(code)
self.assertEqual(ret, 0)
self.assertEqual(atexit._ncallbacks(), n)
Hum, I don't understand: the test leaks references on purpose?
To reproduce the bug, use the command:
python -m test -R 3:3 -m test_callbacks_leak test_atexit ;-)
This issue can be reproduced on Linux; I think I am going to work on this one.
The failing unit test was added by:
commit 2d350fd8af29eada0c3f264a91df6ab4af4a05fd
Author: Antoine Pitrou <solipsis@pitrou.net>
Date: Thu Aug 1 20:56:12 2013 +0200
Issue #18619: Fix atexit leaking callbacks registered from sub-interpreters, and make it GC-aware.
Using git bisect, I found that the leak was introduced by:
commit 6b4be195cd8868b76eb6fbe166acc39beee8ce36
Author: Eric Snow <ericsnowcurrently@gmail.com>
Date: Mon May 22 21:36:03 2017 -0700
bpo-22257: Small changes for PEP 432. (#1728)
PEP 432 specifies a number of large changes to interpreter startup code, including exposing a cleaner C-API. The major changes depend on a number of smaller changes. This patch includes all those smaller changes.
To run a git bisection, start with an old commit from one month earlier: 5d7a8d0c13737fd531b722ad76c505ef47aac96a (May 1). Spoiler: the test doesn't leak at this commit.
git bisect reset
git bisect start
git checkout master
make && ./python -m test -R 3:3 -m test_callbacks_leak test_atexit
# test fails
git bisect bad # bad=test fails (ref leak)
git checkout 5d7a8d0c13737fd531b722ad76c505ef47aac96a
make && ./python -m test -R 3:3 -m test_callbacks_leak test_atexit
# test pass
git bisect good # good=test pass (no leak)
make && ./python -m test -R 3:3 -m test_callbacks_leak test_atexit
# git bisect good or bad depending on the test result
# ... continue until git bisect finds the commit ...
At the end, you should get the commit 6b4be195cd8868b76eb6fbe166acc39beee8ce36.
@Eric Snow: Please don't fix the bug, please explain how to fix it ;-)
SubinterpreterTest.test_subinterps of test_capi also leaks. But it is likely the same bug than this issue (SubinterpreterTest.test_callbacks_leak() of test_atexit leaks).
Oh, 1abcf6700b4da6207fe859de40c6c1bada6b4fec introduced two more reference leaks.
I have pushed a PR, if you can check it. Thanks
- SET_SYS_FROM_STRING_BORROW_INT_RESULT("warnoptions", warnoptions);
Oh, it seems like the regression was introduced by me in the commit 8fea252a507024edf00d5d98881d22dc8799a8d3, see:
Or maybe it comes from recent changes made by Eric Snow and Nick Coghlan related to Python initialization (PEP 432). I don't know.
bpo-30536 has been marked as a duplicate of this issue: [EASY] SubinterpThreadingTests.test_threads_join_2() of test_threading leaks references.
Sorry, that issue wasn't [EASY] at all! But I tried to help Stéphane & Louie as much as I could ;-)
I created bpo-30598: Py_NewInterpreter() leaks a reference on warnoptions in _PySys_EndInit().
New changeset ab1cb80b435a34e4f908c97cd2f3a7fe8add6505 by Victor Stinner (Stéphane Wirtel) in branch 'master':
bpo-30547: Fix multiple reference leaks (#1995)
3.6 is not affected by this issue.
Thanks for taking care of this, Victor, Stéphane & Louie!
Eric Snow: "Thanks for taking care of this, Victor, Stéphane & Louie!"
You're welcome. FYI I'm trying to look more closely at the new Refleaks buildbot, so now it should become easier to catch regressions.
By the way, I like the work you did with Nick. Even if I don't understand everything, I know that we have to do it :-)
This is an important concept to know, and it really isn't an intuitive one; you need to get into the guts of IIS, or configure some third-party web-based application, to ever see or configure modules and handlers in IIS. When you install ASP.NET, for example, the handler is installed and configured by default.
References
- Read “Developing Modules and Handlers with the .NET Framework” here
- Read “IIS Modules Overview” here
- Read “Creating a Synchronous HTTP handler” here and here
- Read about the IHttpHandler interface here
- Read about the IHttpModule interface here
- <Add my article about an asynchronous handler ASAP>
Prerequisites
- Install ASP.NET as per Lab 2 here
- Review Lab 7-3 to get a better understanding of the IIS / ASP.NET request pipeline here
- Extra credit, enable Failed Request Tracing and see how the request is managed, as per Lab 4 here
In this lab you will
- Create and configure a custom handler – How to create a custom handler using C# for IIS
- Create and configure a custom module – How to create a custom module using C# for IIS
- Learn the difference between a handler and a module
Lab 22-1
1. Create a new project in Visual Studio and name it CustomManagedHandler
2. Rename the Class1.cs file to CustomHandler by right-clicking on the file and selecting Rename from the menu
3. Add a reference to the System.Web module by right-clicking the References folder and selecting Add Reference. Add the using System.Web to the CustomHandler class file as well.
4. The CustomeHandler.cs file should look something like this, make note of the implementation of the IHttpHandler interface.
using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Threading.Tasks; using System.Web; namespace CustomManagedHandler { public class CustomHandler : IHttpHandler { public bool IsReusable { get { return false; } } public void ProcessRequest(HttpContext context) { context.Response.Write("Hello from the CustomManagedHandler handler."); } } }
The ProcessRequest() method is the place where you put all the code necessary to handle a request to the file type in the given context. For example, later we will make a request to a file named default.benjamin, when we configure this handler in IIS, it will respond to any request for a file that has the .benjamin extension.
5. Compile and build the handler, a file called CustomManagedHandler.dll is created in the /bin/Release folder within the project you created in step 22-1-1.
Lab 22-2
1. Access a Windows Server running IIS and open the management console
2. Make a request to a file named default.benjamin, you will initially get a 404.0, so create one in the root directory of the Default web site C:\inetpub\wwwroot. Once added you should get a 404.3 which means there is no MIME map configured for the file extension. You can map that extension type to an existing handler, or to the custom one in which you just created in Lab 22-1.
3. Add a /bin directory to the Default web site and place the CustomManagedHandler.dll into the directory
4. Configure the handler mapping using the IIS management console, double-click the Handler Mappings feature
5. Click on the Add Managed Handler link in the Actions pane and configure the CustomManagedHandler.CustomHandler to respond to any request to a *.benjamin file.
Click on the Request Restrictions button and review the contents of the tabs:
- Mapping
- Verbs
- Access
6. Access the default.benjamin file again from the browser, if you did not install ASP.NET then you will get a 500.21.
When successful, you will see the response coded within the ProcessRequest() method.
Lab 22-3
1. Create a new project in Visual Studio and name it CustomManagedModule
2. Rename the Class1.cs file to CustomModule by right-clicking on the file and selecting Rename from the menu
3. Add a reference to the System.Web module by right-clicking the References folder and selecting Add Reference. Add the using System.Web to the CustomModule class file as well.
4. The CustomeModule.cs file should look something like this, make special attention to the implementation of the IHttpModule interface
using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Threading.Tasks; using System.Web; namespace CustomManagedModule { public class CustomModule : IHttpModule { public void Init(HttpApplication app) { app.BeginRequest += new EventHandler(app_BeginRequest); } public void app_BeginRequest(object o, EventArgs e) { HttpContext context = HttpContext.Current; string requestURL = context.Request.RawUrl; if (requestURL.EndsWith(".ben")) { context.Response.Redirect("default.benjamin"); } } public void Dispose() {} } }
In the source code, right-click on the HttpApplication class then Go to Definition. Review all the EventHandlers, for example, AcquireRequestState, AuthenticateRequest, AuthorizeRequest, BeginRequest, etc… these are the events into which you can bind your module to. In this example, the module is bound to the BeginRequest event using the += operator and told to call the app_BeginRequest() method when the HttpApplication.BeginRequest() method is called. In the app_BeginRequest() method I look at the request to see if the extension matches ‘-ben’ and if so, I redirect the request to the default.benjamin file.
5. Compile and build the module, a file called CustomManagedModule.dll is created in the /bin/Release folder within the project you created in step 22-3-1.
Lab 22-4
1. Access a Windows Server running IIS and open the management console
2. Add the CustomManagedModule.dll to the /bin directory of the Default web site , also add a file named default.ben to the root of the Default web site, similar to what you did for the default.benjamin file in step 22-2-2 earlier.
3. Configure the module using the IIS management console, double-click the Modules feature
4. Click on the Add Managed Module link in the Action pane to configure the module
5. Make a request to the default.ben and notice that it is redirected to default.benjamin as expected
Extra credit
A request to default.ben is made
The request enters into the ASP.NET pipeline, the HttpApplication events start firing, specifically the BEGIN:REQUEST. The CustomManagedModule is triggered and the redirect is performed with a 302
The request is redirected to default.benjamin as per the code.
Conclusion
A handler executes based on the type of file extension and is responsible for acquiring and parameters past from the client, using those parameters and sending back an HTML response.
A module hooks itself into a request pipeline and executes code when a specific event in the pipeline is triggered. It is not responsible for managing the response. | https://blogs.msdn.microsoft.com/benjaminperkins/2016/10/26/lab-22-deploy-and-create-a-custom-module-and-handler/ | CC-MAIN-2017-43 | refinedweb | 1,099 | 51.24 |
A tool to assist with updating from PRAW<4 to PRAW4.
Project Description
Release History Download Files
topraw4 is a tool to assist with updating from PRAW<4 to PRAW4. PRAW4 contains many backwards incompatible changes – this tool attempts to identify such issues in your code, and provide you context for fixing them.
Installation
Install prawcore using pip via:
pip install topraw4
Configuration
To use this tool add the following to your script(s) prior to creating any instance of Reddit:
import topraw4 topraw4.configure()
History
0.1.0 (2016-08-24)
- Output notices pertaining to Reddit arguments.
- Support error=False argument to configure that prevents termination on correctable issues.
Download Files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/topraw4/ | CC-MAIN-2018-05 | refinedweb | 131 | 57.87 |
Is there any way we can display real time data in php?
For eg, I am using one of Adafruits' python script to display date, time and CPU scaling frequency on my 2x16 LCD display. It refresh every one second.
Can I call the same script from php and display the output (and refresh every second without the whole page being refreshed) in my webpage?
Here is Adafruits' script,
- Code: Select all
#!/usr/bin/python
from Adafruit_CharLCD import Adafruit_CharLCD
from subprocess import *
from time import sleep, strftime
from datetime import datetime
lcd = Adafruit_CharLCD()
cmd = "/opt/vc/bin/vcgencmd measure_temp"('%s' % ( ipaddr ) )
sleep(1) | http://www.raspberrypi.org/phpBB3/viewtopic.php?t=26170&p=239358 | CC-MAIN-2013-48 | refinedweb | 103 | 72.16 |
GD::SVG - Seamlessly enable SVG output from scripts written using GD
# use GD; use GD::SVG; # my $img = GD::Image->new(); my $img = GD::SVG::Image->new(); # $img->png(); $img->svg();).
GD::SVG exports the same methods as GD itself, overriding those methods..
GD::SVG requires the Ronan Oger's SVG.pm module, Lincoln Stein's GD.pm module, libgd and its dependencies.
These are the primary weaknesses of GD::SVG..
You must change direct calls to the classes that GD invokes: GD::Image->new() should be changed to GD::SVG::Image->new()
See the documentation above for how to dynamically switch between packages.
As SVG documents are not inherently aware of their canvas, the flood fill methods are not currently supported.
Although setPixel() works as expected, its counterpart getPixel() is not supported. I plan to support this method in a future release.
GD::SVG works only with scripts that generate images directly in the code using the GD->new(height,width) approach. newFrom() methods are not currently supported.
Any functions passed gdTiled objects will die..
GD::SVG currently only supports the creation of image objects via its new constructor. This is in contrast to GD proper which supports the creation of images from previous images, filehandles, filenames, and data..
Once a GD::Image object is created, you can draw with it, copy it, and merge two images. When you are finished manipulating the object, you can convert it into a standard image file format to output or save to a file.
GD::SVG implements a single output method,)"
NOT IMPLEMENTED
Provided with a color index, remove it from the color table.
This returns the index of the color closest in the color table to the red green and blue components specified. This method is inherited directly from GD.
Example: $apricot = $myImage->colorClosest(255,200,180);
NOT IMPLEMENTED
NOT IMPLEMENTED
Retrieve the color index of an rgb triplet (or -1 if it has yet to be allocated).
NOT IMPLEMENTED
NOT IMPLEMENTED
Retrieve the total number of colors indexed in the image.
NOT IMPLEMENTED
Provided with a color index, return the RGB triplet. In GD::SVG, color indexes are replaced with actual RGB triplets in the form "rgb($r,$g,$b)".
Control the transparency of individual colors.
NOT IMPLEMENTED
GD implements a number of special colors that can be used to achieve special effects. They are constants defined in the GD:: namespace, but automatically exported into your namespace when the GD module is loaded. GD::SVG offers limited support for these methods..
Lines drawn with line(), rectangle(), arc(), and so forth are 1 pixel thick by default. Call setThickness() to change the line drawing width..
NOT IMPLEMENTED
The GD special color gdStyled is partially implemented in GD::SVG. Only the first color will be used to generate the dashed pattern specified in setStyle(). See setStyle() for additional information.
NOT IMPLEMENTED
NOT IMPLEMENTED
NOT IMPLEMENTED
Set the corresponding pixel to the given color. GD::SVG implements this by drawing a single dot in the specified color at that position..
NOT IMPLEMENTED
This draws a rectangle with the specified color. (x1,y1) and (x2,y2) are the upper left and lower right corners respectively. You may also draw with the special colors gdBrushed and gdStyled.);
NOT IMPLEMENTED
NOT IMPLEMENTED);
Same as the previous example, except that it draws the text rotated counter-clockwise 90 degrees..
NOT IMPLEMENTED
NOT IMPLEMENTED
NOT IMPLEMENTED
getBounds() returns the height and width of the image.
NOT IMPLEMENTED
NOT IMPLEMENTED
NOT IMPLEMENTED
NOT IMPLEMENTED
SVG is much more adept at creating polygons than GD. That said, GD does provide some rudimentary support for polygons but must be created as seperate objects point by point.
Create an empty polygon with no vertices.
$poly = new GD::SVG::Polygon;
Add point (x,y) to the polygon.
$poly->addPt(0,0); $poly->addPt(0,50); $poly->addPt(25,25););
NOT IMPLEMENTED
Return the number of vertices in the polygon.
Return a list of all the verticies in the polygon object. Each mem- ber. Returns the number of vertices affected.
Please see GD::Polyline for information on creating open polygons and splines.).
This is a tiny, almost unreadable font, 5x8 pixels wide.
This is the basic small font, "borrowed" from a well known public domain 6x12 font.
This is a bold font intermediate in size between the small and large fonts, borrowed from a public domain 7x13 font;
This is the basic large font, "borrowed" from a well known public domain 8x16 font.
This is a 9x15 bold font converted by Jan Pazdziora from a sans serif X11 font.
This returns the number of characters in the font.
print "The large font contains ",gdLargeFont->nchars," characters\n";
NOT IMPLEMENTED
This returns the ASCII value of the first character in the font
These return the width and height of the font.
($w,$h) = (gdLargeFont->width,gdLargeFont->height);
The Bio::Graphics package of the BioPerl project makes use of GD::SVG to export SVG graphics.:
I've also prepared a number of comparative images at my website (shameless plug, hehe):
The following internal methods are private and documented only for those wishing to extend the GD::SVG interface..
The _reset() method is used to restore persistent drawing settings between uses of stylized brushes. Currently, this involves
- restoring line thickness.
Todd Harris, PhD <harris@cshl.org>
Copyright 2003 by Todd Harris and the Cold Spring Harbor Laboratory
This library is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
GD, SVG, SVG::Manual, SVG::DOM | http://search.cpan.org/~twh/GD-SVG-0.25/SVG.pm | crawl-002 | refinedweb | 926 | 57.47 |
There is lots of code that we write that follow standard patterns with some minor changes for our exact situation. Visual Studio has a nice feature called Code Snippets which provides a way to create reusable code templates for common scenarios. The idea is that you activate the snippet, then just enter the needed values.
The ones I use the most are the snippets for .NET properties. There are actually two, one called “prop” and another called “propfull”. The code snippets are available from intellisense, so if I start typing “prop” inside my class and I can see the snippet
Then I select the snippet (hit Tab once to select the item in intellisense)
Then hit Tab again to activate the snippet. This is the result
A template of code is inserted and activated. Notice the two yellow rectangles? Those are the values I need to enter. Its like a little form for my code. In the image, the cursor is on the “int” one. If I press Tab again, it will select the next rectangle. In this way I can just type a new value in the rectangle, then press Tab to move to the next rectangle and enter the value there. So I just tab from one box to the next, changing values as needed and when I’m done, I press Enter and go back to regular code entry mode.
Try this: type “prop”, <Tab>, <Tab>, “string”, <Tab>, “Name”, <Enter>
You will end up with a simple property of type string named Name.
Now try the propfull snippet.
type “propfull”, <Tab>, <Tab>, “string”, <Tab>, “_city”, <Tab>, “City”, <Enter>
The propfull code snippet creates a full Property with a field to store the property’s value. Notice that when you entered “_city” that it changed in all 3 places in the code snippet.
Using these two code snippets you can very quickly fill out a new class’s properties.
There are lots of built in code snippets for lots of different things and you can also easily create your own (like I did for Windows 8 apps). To discover what's available, you can right click anywhere in your code and select “Insert Snippet” and you’ll get access to all the snippets by category. (If you work in XAML be sure to check out propdp and propa)
For more on Code Snippets see the documentation.
(next tip: quickly adding a namespace using statement)
This post is part of a series of Visual Studio tips. The first post in the series contains the whole list. | https://blogs.msdn.microsoft.com/benwilli/2015/04/15/visual-studio-tip-4-code-snippets/ | CC-MAIN-2019-09 | refinedweb | 427 | 78.99 |
Automated Swift linting on pull requests
Deprecation Notice: This repo has been deprecated, since SwiftLint integration is now a part of Danger Swift as a first-class feature. You can read this issue for context. This repo will remain available for legacy use, but new projects should not use this repo.
Danger Swift plugin for SwiftLint. So you can get SwiftLint warnings on your pull requests!
(Note: If you're looking for the Ruby version of this Danger plugin, it has been moved here.)
Install and run Danger Swift as normal and install SwiftLint in your CI's config file. Something like:
dependencies: override: - npm install -g danger # This installs Danger - brew install danger/tap/danger-swift # This installs Danger-Swift - brew install swiftlint # This is for the Danger SwiftLint plugin.
Then use the following
Dangerfile.swift.
// Dangerfile.swift
import Danger import DangerSwiftLint // package:
SwiftLint.lint()
That will lint the created and modified files
If you want the lint result shows in diff instead of comment, you can use inline_mode option. Violations that out of the diff will show in danger's fail or warn section.
SwiftLint.lint(inline: true)
You can also specify a path to the config file using
configFileparameter and a path to the directory you want to lint using
directoryparameter. This is helpful when you want to have different config files for different directories. E.g. Harvey wants to lint test files differently than the source files, thus they have the following setup:
SwiftLint.lint(directory: "Sources", configFile: ".swiftlint.yml") SwiftLint.lint(directory: "Tests", configFile: "Tests/HarveyTests/.swiftlint.yml")
By default, only files that were added or modified are linted.
It's not possible to use nested configurations in that case, because Danger SwiftLint lints each file on it's own, and by doing that the nested configuration is disabled. If you want to learn more details about this, read the whole issue here.
However, you can use the
lintAllFilesoption to lint all the files. In that case, Danger SwiftLint doesn't lint files individually, which makes nested configuration to work. It'd be the same as you were running
swiftlinton the root folder:
SwiftLint.lint(lintAllFiles: true)
By default, Danger SwiftLint runs
swiftlintassuming it's installed globally. However, there're cases where it makes sense to use a different path. One example would be if you've installed SwiftLint using CocoaPods.
To use another binary, you can use the
swiftlintPathoption:
SwiftLint.lint(swiftlintPath: "Pods/SwiftLint/swiftlint")
If you find a bug, please open an issue! Or a pull request :wink:
No, seriously.
This is the first command line Swift I've ever written, and it's the first Danger Swift plugin anyone has ever written, so if something doesn't work, I could really use your help figuring out the problem.
A good place to start is writing a failing unit test. Then you can try to fix the bug. First, you'll need to fork the repo and clone your fork locally. Build it and run the unit tests.
git clone cd danger-swiftlint swift build swift test
Alright, verify that everything so far works before going further. To write your tests and modify the plugin files, run
swift package generate-xcodeproj. Open the generated Xcode project and enjoy the modernities of code autocomplete and inline documentation. You can even run the unit tests from Xcode (sometimes results are inconsistent with running
swift test).
One place that unit tests have a hard time covering is the integration with the
swiftlintcommand line tool. If you're changing code there, open a pull request (like this one) to test everything works.
There are tonnes of ways this plugin can be customized for individual use cases. After building the Ruby version of this plugin, I realized that it's really difficult to scale up a tool that works for everyone. So instead, I'm treating this project as a template, that you to do fork and customize however you like!
import DangerSwiftLintpackage URL to point to your fork.
Because you need to tag a new version, testing your plugin can be tricky. I've built some basic unit tests, so you should be able to use test-driven development for most of your changes.
If you think you've got a real general-purpose feature that most users of this plugin would benefit from, I would be grateful for a pull request. | https://xscode.com/ashfurrow/danger-swiftlint | CC-MAIN-2021-21 | refinedweb | 735 | 65.12 |
Ever since saving Pozyx registers to its flash memory was implemented, changing your device's ID has become possible. It's possible to do this both locally and remotely, and that's what will be discussed in this short article.
Open up the Arduino sketch in the Arduino IDE under File > Examples > Pozyx > useful > pozyx_change_network_id.
Open up the Python script in the Pozyx library's useful folder. If you downloaded your library, you can likely find this at "Downloads/Pozyx-Python-library/useful/change_network_id.py"
The code explained
Parameters
uint16_t new_id = 0x1000; // the new network id of the pozyx device, change as desired bool remote = true; // whether to use the remote device uint16_t remote_id = 0x6000; // the remote ID
new_id = 0x1000 # the new network id of the pozyx device, change as desired remote = True # whether to use the remote device remote_id = 0x6000 # the remote ID
The parameters are what you'd expect for changing the ID: the
new_id the device will take, and whether we're working with a
remote device with ID
remote_id. Change these to suit your preferences. In the parameters shown above, running the code would mean that a remote device with ID 0x6000 would have its ID changed to 0x1000.
Changing the ID
Now, let's look at the code that effectively reconfigures a device's network ID.
void setup(){ // ... // Pozyx and serial initalization Pozyx.setNetworkId(new_id, remote_id); uint8_t regs[1] = {POZYX_NETWORK_ID}; status = Pozyx.saveConfiguration(POZYX_FLASH_REGS, regs, 1, remote_id); if(status == POZYX_SUCCESS){ Serial.println("Saving to flash was successful! Resetting system..."); Pozyx.resetSystem(remote_id); }else{ Serial.println("Saving to flash was unsuccessful!"); } }
def set_new_id(pozyx, new_id, remote_id): print("Setting the Pozyx ID to 0x%0.4x" % new_id) pozyx.setNetworkId(new_id, remote_id) if pozyx.saveConfiguration(POZYX_FLASH_REGS, [POZYX_NETWORK_ID], remote_id) == POZYX_SUCCESS: print("Saving new ID successful! Resetting system...") if pozyx.resetSystem(remote_id) == POZYX_SUCCESS: print("Done")
We can see that there are three steps involved in configuring a device with a new network ID. As of writing,
setNetworkId doesn't change the network ID the Pozyx is using, and the Pozyx needs to be reset after its new ID has been saved to its flash memory. So these three steps are:
- Setting the Pozyx's new ID using
setNetworkId
- Saving the Pozyx network ID register. Note that while the ID is a
uint16_tvalue, we only use the first register address to save it. We don't need to save two registers.
- If saving was successful, we reset the device. After the reset, the device uses the new ID. All the while, textual feedback is given to the user.
Finishing up
Changing the ID does not seem to have an immediate purpose except for if you have devices with a double ID, but changing the ID can be convenient for other reasons as well to make your setup more maintainable. Identifying anchors with a prefix, for example. Naming your anchors 0xA001 up to 0xA004 or more. This is easier to read than the default device IDs. If you have anchors on different UWB channel, naming the set on channel 5 0xA501 to 0xA504 and the set on channel 2 0xA201-0xA204 will make your life easier, and so on. | https://www.pozyx.io/Documentation/Tutorials/changing_id | CC-MAIN-2017-22 | refinedweb | 527 | 64.51 |
Policies/Kdepim Coding Style
Contents
- 1 Purpose of this document
- 2 Why is coding Style useful?
- 3 What do we need?
- 4 The specification rules of coding style for kdepim and akonadi
- 5 Migration
- 6 Download Coding Style
- 7 Two scripts to check all the rules and to make the all the changes
- 8 The rules and the scripts to check and to make the changes
- 8.1 Don't test all directories
- 8.2 Indentation with four spaces, don't use any <TAB>s
- 8.3 Trim the lines
- 8.4 Only single empty lines
- 8.5 The first line and the last line(s) may not be empty
- 8.6 Only one statement per line
- 8.7 Variable declaration
- 8.8 Only one declaration per line
- 8.9 Use one space after each keyword, but not after a cast
- 8.10 Use a space after the name of the class
- 8.11 #include directive
- 8.12 Place * and & next to the variable
- 8.13 Use namespace foo { in the same line
- 8.14 Use struct foo with { at the next line
- 8.15 Each member initialization of a method in separate line
- 8.16 Surround all operators with spaces
- 8.17 switch rules
- 8.18 try-catch rules
- 8.19 if, else, for, while (and similar macros) rules
- 8.20 typedef struct statement over more lines
- 8.21 Don't use & without a variable
- 8.22 Don't use untyped enum
- 8.23 Don't use enum with empty member
- 8.24 No ; after some macros
- 8.25 No "one line" if, else, for or while statement
- 8.26 No space between some keywords
- 8.27 No space around the index of an array
- 8.28 No space around an expression surrounded with braces
- 8.29 No space before : in a case statement
- 8.30 No space before ; at the end of statement
- 8.31 No ); alone in a line
- 9 Use all the scripts
- 10 Check the objects and the libs
- 11 Check the assembler files
- 12 The results of the migration
Purpose of this document
This.
But we have some more rules for some more situations.
Why is coding Style useful?.
What do we need?
We need at least:
- a specification (a set of rules) for the coding style of the sources
- some tools to check the sources against the specification
- some tools to change the sources
astyle is a very.
The specification rules of coding style for kdepim and akonadi
These are the sub-sections under The rules and the scripts ...
Migration.
Download Coding Style
You can download the software with test files and install instructions.
Download Coding Style: Media:CodingStyle.tar.gz
Two scripts to check all the rules and to make the all the changes:
- All-Check.sh
- Change-All.sh
For each specification rule, the name of the scripts to check and apply the changes are given at the beginning of the section.
The rules and the scripts to check and to make the changes.
Don't test all directories
If a .no_coding_style file is present on a directory, the test will not be done.
If a .no_recursion file is present on a directory, we do not explore the subdirectory(ies)
Indentation with four spaces, don't use any <TAB>s
- Tabs-check.sh
- Tabs.awk
- The changes are well done with
astyle --indent=spaces=4
Trim the lines
- Trim-check.sh
- Trim.awk
- The changes are well done with:
astyle --indent=spaces
Only single empty lines
Refer to
- Twice-check.sh
- Twice-change.sh
- Twice.awk
The first line and the last line(s) may not be empty
Some of the sources have empty lines at the beginning of the file. Some have one or more empty last line(s).
- First-check.sh
- First-change.sh
- First.awk
Only one statement per line
We don't provide (yet) any check for this rule.
Variable declaration
We follow the kdelibs rule: [[1]] We don't provide (yet) any check for this rule.
Only one declaration per line
We follow the kdelibs rule: [[2]] We don't provide (yet) any check for this rule.
Use one space after each keyword, but not after a cast foreach sizeof new Q_FOREACH FOREACH do try enum union Q_FOREVER bool char char16_t char32_t double float int long wchar_t signed unsigned short
- SpaceAfter-check.sh
- SpaceAfter-change.sh
- SpaceAfter.awk
Use a space after the name of the class
We prefer having a space before the keyword public at the definition of a new class:
class DbException : public Akonadi::Exception { ... };
- Public-check.sh
- Public-change.sh
- Public.awk
#include directive
Refer to
We prefer no space at the beginning of the directive. Some (not many) files need to be corrected to unify to all the other files.
// some files use this # include <A/b> // we prefer to unify the coding style #include <A/b>
- Space-Include-check.sh
- SpaceInclude.awk
Place * and & next to the variable:
- &,
- & >
- * >
- ( ) and ( ) empty function call
- enum { untyped enum
Not all the ouputs are real errors. Some codings might be correct.
- NoSpace-check.sh
- NoSpace.awk
- using astyle to make the changes:
astyle --reference=name --align-pointer=name
Some lines with must be manually corrected.
Use namespace foo { in the same line
We prefer having all in one line:
namespace foo { ... }
- Namespace-check.sh
- Namespace.awk
- astyle to make the changes.
Use struct foo with { at the next line
We prefer having the same coding style for a class and a struct
struct foo { ... }
- Struct-check.sh
- Struct-change.sh
- Struct.awk
NOTE: The script must be use after astyle.
Each member initialization of a method in separate line
This example shows the indentation we prefer. Notice that colon sign and comma(s) are at the beginning of each initialization line(s):
class myClass { // some lines public: myClass(int r, int b, int i, int j) : r(0) , b(i) , i(5) , j(13) { // more lines }
- Default-check.sh
- Default-change.sh
- Default.awk
Surround all operators with spaces
This is well done with astyle:
astyle --pad-oper
switch rules
This example shows the indentation we prefer:
switch (a) { case one: // some lines break; case two: { // some lines break; } case three: { // some lines return; } default: // some lines break; }
- Switch-check.sh
- Switch.awk
- astyle makes the changes
NOTE: By using a new block, we prefer having break; and return; within the new block.
try-catch rules
This example shows the indentation we prefer:
try { // some lines } catch (...) { }
- TryCatch-check.sh
- TryCatch.awk
if, else, for, while (and similar macros) rules
Even for blocks with only one statement, we prefer to use braces such as:
if (condition) { statement; }
This should be used with the keywords if, else, for, while and similar macros.
- If-check.sh
- Else-check.sh
- For-check.sh
- While-check.sh
- If.awk
- Else.awk
- For.awk
- While.awk
- astyle makes the changes.
But we get some false alarm with statements that extend over more than one line:
if (condition_1 && condition_2) { statement; }
typedef struct statement over more lines
This example shows the indentation we prefer:
typedef struct foo { // some lines }
- TypedefStruct-check.sh
- TypeStruct.awk
Don't use & without a variable
It is more readable to have the name of (all) the variable(s) in the first line of a method.
The chnages must be done manually.
Don't use untyped enum
Instead of having an untyped enum such as:
enum { aElement= 123 }
we prefer a #define directive:
#define aElement 123
Don't use enum with empty member
The most compilers do not complain such a code:
enum mytype { aElement, bElement, }
The last element is empty. We prefer a "pedantic" code such as:
enum mytype { aElement, bElement }
- EnumPedantic-check.sh
- EnumPedantic.awk
No ; after some macros
- Pedantic-check.sh
- Pedantic.awk
No "one line" if, else, for or while statement.
- OneLine-If.sh
- OneLine-Else.sh
- If.awk
- Else.awk
No space between some keywords
We don't want to have a space:
- between & and >
- between * and >
- between ( and ), an empty parameter list.
- NoSpace-check.sh
- NoSpace.awk
No space around the index of an array
We don't want to have spaces around the index of an array element.
- coding-style-check-No-Space.sh
The output of the check script is:
check the file NO-space-example.cpp 15: [<Space> found. Check it. a = b[ i ]; 15: <Space>] found. Check it. a = b[ i ];
No space around an expression surrounded with braces
We prefer function definition and function call with no space after the opening brace and before the closing brace.
- coding-style-check-Parenthesis.sh
- This is well done with astyle:
astyle --unpad-paren
Note that astyle makes also changes within the macros SIGNAL and SLOT, which aren't desired. This can be corrected with a Qt-utility qt5/qtrepotools/util/normalize/normalize:
normalize --modify filename
No space before : in a case statement
We don't provide (yet) any check for this rule.
No space before ; at the end of statement
We don't provide (yet) any check for this rule.
No ); alone in a line
This is sometime to be find with a function call with many arguments, listed on many lines.
- coding-style-check-Parenthesis-alone.sh
Use all the scripts
All the scripts can be used with one only script.
Check the objects and the libs:
- save
- test
- clean
Check the assembler files.
Generate the assembler files.
Remove the debug information:
- save
- test
- clean
The results of the migration
The results can be seen here. | https://techbase.kde.org/index.php?title=Policies/Kdepim_Coding_Style&oldid=84011 | CC-MAIN-2019-51 | refinedweb | 1,589 | 73.27 |
Until now, you have learned about class and objects. In this chapter, you will expand your knowledge to an advanced level.
In this chapter, we are going to make subclasses of a class. A subclass is also called a derived class and the class from which it is derived (parent class) is called superclass or base class.
We also know that a subclass can use members of its parent class.
In C#, we use
: to make a subclass.
In this example, ChildClass is derived from a class ParentClass.
Let's take an example.
using System; class Student //base class { public string name; public string GetName() { return this.name; } public void SetName(string name) { this.name = name; } public void PrintAbout() { Console.WriteLine("I am a student"); } } class Undergraduate: Student { public void UndergradPrint() { Console.WriteLine("I am an Undergraduate"); } } class Test { static void Main(string[] args) { Student s = new Student(); Undergraduate u = new Undergraduate(); s.PrintAbout(); u.PrintAbout(); u.UndergradPrint(); } }
class Undergraduate: Student → Undergraduate is the name of a class which is a subclass or derived class of the Student class.
As stated above, a child class can access all members of its parent class (if they are not private), you can see that the object of the Undergraduate class is accessing the method of its parent class -
u.PrintAbout().
However, a parent class can't access members of its child class. For example,
s.UndergradPrint() will give an error.
Before going further, let's learn more about the protected modifier.
C# Protected
Any protected member of a class (variable or function) can be accessed within that class or its subclass. It cannot be accessed outside that.
Let's take an example.
using System; class Student //base class { protected string name; public void SetName(string name) { this.name = name; } } class Undergraduate: Student { public void PrintName() { Console.WriteLine(name); } } class Test { static void Main(string[] args) { Undergraduate u = new Undergraduate(); u.SetName("xyz"); u.PrintName(); } }
In the Student class, we made the variable name protected. So, it can be accessed directly within its subclass Undergraduate. And we did the same. We accessed the variable name directly in the method PrintName of its subclass.
We first created an object u of the subclass Undergraduate. Since an object of a subclass can access any of the members of its parent class, so u called the method SetName of its parent class with a string parameter "xyz". This string got assigned to the variable name thus making the value of name as "xyz" for the object u.
Then, u called the function PrintName which printed the value of name i.e., xyz.
C# Constructor of Subclass
We can have constructors for both base and derived class to initialize their respective members. The constructor of the derived class can call the constructor of the base class, but the inverse is not true. Let's see how.
Calling Base Class Constructor Having No Parameter
If the base class constructor has no parameter, then it will be automatically whenever the derived class constructor will be called, even if we do not explicitly call it.
Look at the following to understand it.
using System; class A { public A() { Console.WriteLine("Constructor of A"); } } class B: A { public B() { Console.WriteLine("Constructor of B"); } } class Test { static void Main(string[] args) { B b = new B(); } }
While calling the constructor of any class, the compiler first automatically calls the constructor of its parent class. This is the reason that while calling the constructor of class B, the compiler first called the constructor of its parent class A and then the constructor of B. Thus when the constructor of B was called, the compiler first called the constructor of A thus printing "Constructor of A" and after that "Constructor of B".
Let's see another example where the constructor of the parent class gets automatically called first.
using System; class A { public A() { Console.WriteLine("Constructor of A"); } } class B: A { public B() { Console.WriteLine("Constructor of B"); } } class C: B { public C() { Console.WriteLine("Constructor of C"); } } class Test { static void Main(string[] args) { Console.WriteLine("Creating object of A :"); A a = new A(); Console.WriteLine("Creating object of B :"); B b = new B(); Console.WriteLine("Creating object of C :"); C c = new C(); } }
Here, when the object of A was created, its constructor was called, printing "Constructor of A".
When the object of B was created, the compiler first called the constructor of its parent class A, printing "Constructor of A" and after that printing "Constructor of B".
Similarly, when the constructor of C was called, first the constructor of its parent class B was called. On calling the constructor of B, the constructor of A got called, printing "Constructor of A" followed by "Constructor of B". At last, "Constructor of C" got printed.
Calling Parameterized Base Class Constructor
Unlike parent class constructors having no parameter, parameterized parent class constructors are not called automatically while calling its child class constructor.
To call a parent class constructor having some parameter form the constructor of its subclass, we have to use the
base keyword.
base keyword can be used to access any member of base class. Let's take an example on this first.
using System; class A { public void Method() { Console.WriteLine("Inside A"); } } class B: A { public B() { base.Method(); } } class Test { static void Main(string[] args) { B b = new B(); } }
In this example, we accessed the method of parent class using the
base keyword -
base.Method().
We can also access the constructor of a base class using the
base keyword. Suppose we need to pass a variable x to the constructor of class A from class B. We can do so by writing
public B(): base(x). Let's take an example.
using System; class A { public A(int l) { Console.WriteLine($"Length : {l}"); } } class B: A { public B() : base(10) { Console.WriteLine("This is constructor of B"); } } class Test { static void Main(string[] args) { B b = new B(); } }
In this example, we passed 10 to the constructor of A (parent class) from the class B (child class).
Let's take one more example.
using System; class Rectangle { public int length; public int breadth; public Rectangle(int l, int b) { length = l; breadth = b; } public int GetArea() { return length*breadth; } public int GetPerimeter() { return 2*(length+breadth); } } class Square: Rectangle { public Square(int a): base(a, a) { } } class Test { static void Main(string[] args) { Square s = new Square(2); int area, p; area = s.GetArea(); p = s.GetPerimeter(); Console.WriteLine($"Area : {area}"); Console.WriteLine($"Perimeter : {p}"); } }
We know that a square is also a rectangle with the same length and breath. This is what we did in the constructor of Square.
We created an object s of class Square and passed 2 at the time of creating it. So, this 2 will be passed to the constructor of class Square. Hence, the value of a will be 2.
In the constructor of Square, constructor of its superclass Rectangle is being called with the value of a as 2, thus making the value of both its length and breadth equal to a i.e. 2.
Finally, in the Main method, we used the object s of the class Square to call two functions of its parent class Rectangle.
C# sealed
In C#,
sealed keyword is used to restrict a class from being derived. It means that if we use
sealed with a class, we won't be able to create any subclass of it.
We can also use
sealed keyword with methods to prevent them for being overridden which we will see in next chapters.
Let's look at an example of using
sealed with a class.
using System; sealed class A { } class B: A { } class Test { static void Main(string[] args) { } }
In this example, we tried to make a subclass of sealed class A and that's why we got the error. | https://www.codesdope.com/course/c-sharp-inheritance/ | CC-MAIN-2022-40 | refinedweb | 1,322 | 65.12 |
Listen as Joe Duffy leads a talk about Cloud Engineering with special guests Ken Exner and Luke Hoban. Joe discusses how he thinks of the cloud as a giant supercomputer and dives into each component of the cloud operating system. Then Luke talks about how Pulumi makes authoring cloud components easy and gives a few examples of Pulumi’s multi-language components. Finally, Joe and Ken discuss how customers are currently using the cloud and how they both envision the future.
Presenters
- Joe Duffy, CEO & Founder, Pulumi
- Luke Hoban, CTO, Pulumi
- Ken Exner, General Manager Developer Tools, AWS
Hi, everyone. My name’s Joe Duffy, founder and CEO of Pulumi. I’m here today to talk to you about something we’re really excited about that we’re calling developer-first infrastructure. And to get started, to set a little context, I wanted to start by talking about the Modern Cloud. It’s no surprise to anyone here that the Modern Cloud is very complicated, but with that complexity comes a lot of exciting benefits.
But you know, we’re shipping code faster than ever before today. We’ve got more moving pieces than ever before. We’re really talking about distributed architectures, and that’s actually one of the themes of the talk today that I’m really excited about, and why bringing the cloud closer to developers is so exciting. But the fact is, between AWS, Azure, Google Cloud, there’s hundreds of building block services in each one of those that can be stitched together in infinite ways to build powerful software. There’s vendors like Cloudflare and Snowflake, and many, many others that are introducing their own cloud services that are exciting as well.
And then we’ve got the whole Cloud-Native ecosystem, with a new project seemingly launched every week. It used to be that there was a new JavaScript framework every week, and now we have a new Cloud-Native tool every week to stay on top of. It’s a lot of moving pieces, a lot of complexity in how we’re building and shipping software these days. And if you’re a developer, your background’s being a software developer like mine is, that infrastructure might seem boring and tedious. In fact, there is sort of a meme that infrastructure should be boring.
But should it really? Is it actually tedious? Does it have to be? I think a different perspective is that cloud infrastructure is really the essential building block of our modern application architectures. That sure, maybe setting up a network or a Kubernetes cluster, that should be, quote, boring and left to the experts who are the infrastructure experts who are gonna go make sure that’s secure and reliable and cost-effective. But what about a serverless function? Is that in the infrastructure domain or is that in the application domain? I’m gonna argue today that it’s a little bit of both, and that developers really should care about some of these things. You know, Pub/Sub topic, a queue. A lot of these are essential building blocks of building distributed applications, and that’s really exciting.
And that’s something that’s changed over the last 10 years. I would say 10 years ago, the conversation was more about, “Hey, let’s provision three virtual machines in a database. " And sure, infrastructure was relatively boring back then, but the world of Modern Cloud has really changed all of that. The way I think about it, and the context for today’s talk is what if we re-imagined the cloud as effectively, a giant supercomputer that’s planet-scale, infinitely scalable, that’s compute and storage available to us on demand as our application architecture evolves, and as our application becomes wildly successful and we go from tens of users to thousands, to millions of users. You think of what it takes to run Lyft, for example.
A lot of these modern companies that were fortunate enough to be born in the cloud are actually using the cloud, using this giant supercomputer to fuel their business and really fuel the innovation. And as a developer, the same way I’m writing software that runs on an Intel PC back 15 years ago, now I’m writing code that runs on this huge computer. And arguably the cloud has become a bit of the operating system because there’s a software component sitting on top of the supercomputer. And as a developer, this is really exciting. This goes from infrastructure being this boring thing that’s an afterthought to really being part of an application’s architecture.
I think of, for many decades, we’ve been talking about the age of distributed computing. In fact, in the ’50s and ’60s, there were tons of papers and interesting research done around communicating sequential processes, which, by the way, is the foundation for Go’s concurrency model. A lot of these pieces have come together finally. I think a lot of folks predicted it would happen sooner when we went through the whole multi-core transition, which led to concurrency and async programming. But now what we’re seeing is, thanks to the cloud, the age of distributed computing has actually arrived, and that’s a very exciting thing.
The challenge is, how do we tap into all of that capability? It’s not always easy. The cloud is complicated. You’re wading in tons of YAML, and there’s sort of a missing programming model. And so to take us into the solution to that problem, I’d like to demonstrate an analogy in three parts. First of all, several decades ago, we went from writing assembly language to higher-level programming languages, like C, Fortran, COBOL, et cetera.
Obviously we’ve come a long way since then, but this was a huge revolutionary change in enabling people to focus on building better software, being more productive. It almost sounds funny in hindsight now, many decades later, but actually writing in C, it was way more productive than writing in assembly language. Fewer bugs, you just get more done, 10x the productivity, right? And that was a huge change in our ability to build this software ecosystem. Arguably Microsoft wouldn’t exist. Amazon wouldn’t exist.
None of the current industry leaders would exist if it weren’t for that innovation. And then we didn’t stop there, right? We kept going. So we went from writing C, which is a relatively low-level language. You know, C’s goal was to expose the underlying capability of the hardware directly to the programmer. And for good reason.
C was used to write operating systems and the run times themselves that I’m about to talk about. But that wasn’t good enough to fuel sort of the next several decades of innovation. We had to increase the level of abstraction even further. And so we came up with things like Java and JavaScript, with Node. js eventually coming on the scene, and Python, Go, .
NET. There’s a lot of things I could have put in this category, but here are some of the things that come to mind when you think about higher-level programming languages. Some higher-level than others, some dynamically type checked, some statically typed checked. But the key here is we kept increasing the level of abstraction so that developers could focus on business logic, could focus on what matters for the problem they’re trying to solve and not swizzling bits and bytes like in C, for example. Of course, plenty of people still write code in C.
Envoy is written in C++ because the developers there really wanted tight control over performance. And the key here is we’ve got a lot of different tools in the tool belt that we can choose from. One key part of this that I think I wanna highlight, that I hope folks keep in mind as we go through the rest of this talk, is some of these are multi-operating system technologies. Java, although yes, it works on Windows, is not Windows-specific. You can run Java on the server.
In fact, that’s one of the reasons it became very popular in the early days was, you could write servlets in Java, and you could run it on the server. This was around when the web was emerging, right? And Python, you can run Python anywhere. And so that leads to the next analogy, and the final part in this three part series is, the cloud really has become the new operating system, and there’s a software fabric that sits on top of the hardware that we’re now targeting with our software. So that we don’t have to be always experts in the underlying bits and bytes of the cloud. And this architecture diagram on the left is Windows NT.
On the right, we’ve got Kubernetes, which arguably has become the standard scheduler for workloads in the cloud. Obviously it doesn’t cover everything that you do in the cloud, and so the analogy is a loose one. We don’t really have that one diagram that works anywhere, but we’ve got lots of different operating systems, too. We’ve got Windows, Mac OSX, Linux, et cetera. And so I think the reason for highlighting this, is just a mindset shift.
It’s a shift in thinking about the cloud as just a container of compute that runs my code, to an entire ecosystem of building blocks that I can use to build powerful software. And that mindset is why we founded Pulumi, frankly. But it is really exciting. And that’s the prelude for the rest of the talk. infrastructure as code is what we ended up building with Pulumi, but unfortunately infrastructure as code today is largely YAML domain-specific languages.
The analogy here is, think back to the assembly language to C transition that I highlighted earlier. A lot of the configuration management techniques that we use today treats the infrastructure as though it’s an afterthought. It doesn’t embrace sort of this new worldview that we just covered. So even though it’s called infrastructure as code, it’s typically not, it’s missing a lot of the capabilities that we think of when we use the term code. It’s text, so we can version it, we can check it in.
It’s repeatable so we can effectively execute it, which leads to sort of a code analogy. But it’s definitely a loose analogy to say that it’s code. What we’re seeing now is this transition to infrastructure as software which is, think of all these cloud services that we’re configuring using infrastructure as code, as composable building blocks. Really take that three-part analogy that I was just covering and take that to the next level. I think I mentioned we at Pulumi built an infrastructure as code technology.
I’ll be honest, we didn’t know that’s what we were gonna build at the outset. We had that vision of, hey, the cloud is the next operating system and is really this powerful capability. infrastructure as code is what gives you this programmable abstraction and a consistent resource model on top of the cloud service model so that you can then compose all of these things and use programming languages that we know and love to build cloud software in a more native way and bring the cloud infrastructure closer to the application architecture. And so when I say infrastructure as software, what do I mean? Well, software engineering, if you’re a developer, means a lot of different things. It means typically you’ve got facilities for abstraction.
We’ve got higher-level languages that give us great productivity and expressiveness. We’ve got type checking to find errors much sooner in that process. We’ve got great IDEs, editors that have syntax-highlighting colorization. Red squiggles if I make a typo. Right click to refactor, all of these things just built-in so that the code is just there and I can get into the flow and really, really get my job done.
There’s unit testing and integration testing, various ways to make sure the code is correct before we go deploy it or execute it in production. Debugging so I can interactively find errors, CI/CD, all these things that we know and love about software engineering, historically have not applied to infrastructure as code. And we think that’s a real shame. And that’s why we’re excited to talk about Pulumi today. There are other folks who are doing great work here as well.
I wanna call out the AWS CDK team, for example, for also seeing this exciting future. But really, really excited about this next level of innovation and really up-leveling our entire game when it comes to infrastructure. And to show that in action, I’m gonna invite Luke Hoban, our CTO of Pulumi, to program the cloud.
Hi, my name is Luke Hoban. I’m the CTO of Pulumi. And I’m gonna today talk about how we can take some of the developer-first infrastructure principles that Joe talked about and apply them using a tool like Pulumi. So today I’m gonna focus on using Pulumi, but really most of what I’m talking about here will apply for any modern infrastructure as code tool that’s taking a developer-first approach to how we think about managing and working with our infrastructure. I’m gonna start where you’d kind of expect for a developer-first experience, which is inside my IDE. And so I’m here inside Visual Studio Code. I’ve got some TypeScript up on my IDE, and I’m just gonna start writing and programming some infrastructure, and programming the cloud from scratch.
I’m gonna start with a really simple use case. And as we go through this demo, we’ll bring in more complex building blocks to show you how this looks as you get to larger and more realistic applications. But to begin with, let’s just start with something really simple. Let’s create an AWS S3 bucket. Now, the first thing we see as we start typing is that we’re getting that same developer experience that we’re used to for an application developer, the rich IntelliSense, the completion lists, the squiggles, all those sorts of very quick feedback, the ability to discover and understand what’s available.
For example, when I type AWS dot, I can see all of the different namespaces that are available for all the various features that are available inside AWS. There are many, many hundreds of different APIs available from the AWS platform, and all of them are available here within Pulumi. I can then go ahead and say I wanna create a bucket. Just give it a name. And now, just so I have something to work with, I’m gonna export out the name of the bucket.
So I’ll export, bucket name is equal to bucket dot ID. Okay, so I’ve written my first program. Let’s go and deploy this and create some infrastructure. I’m gonna come down here, I’m just gonna type Pulumi up to update the cloud with this infrastructure. Now, one of the really important things with these modern infrastructure as code tools like Pulumi, is that even though they’re using a programming language like TypeScript in this example, they really are still Desired State Configuration.
So when I say Pulumi up, the first thing it does is shows me a preview. And that preview tells me what changes are gonna be made when I try and deploy this program to the cloud. Now in this case, because I have nothing deployed so far, it knows it’s just gonna need to create that bucket that I specified. I can see the details, understand exactly what’s gonna get created, and go ahead and say yes to actually deploy that into the cloud. Okay, I’ve created my bucket.
Now, I might wanna create additional objects. So one thing I could do is create an object, and I’ll call it obj. And this time, instead of a bucket, I’ll grab a bucket object. I’ll call it obj again. And this time I need to specify some properties to indicate the details of what this thing should be.
And so for example, I’m gonna say I want this object to live inside that bucket. I’m gonna say I want the content to be hello world. Okay, so I’ve specified an additional resource. And now when I type Pulumi up, we see that there’s two unchanged. So that bucket that I already deployed doesn’t need to be modified.
But this bucket object does need to be created. So I’ll go ahead and say yes to create that. Here we go. I created my second piece of infrastructure. Now I can say something like AWS S3 LS and I can grab the name of this Pulumi stack output bucket name.
And we can see that we have a single object named obj, 13 bytes for hello world inside my bucket. Okay, so we’ve got some infrastructure. We’re building up something inside this bucket. Now so far, we’ve just done things that you’d kind of expect from any infrastructure as code tool. Whether it’s CloudFormation or Azure Resource Manager or Kubernetes YAML, anything like that.
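For reference, the program built up to this point looks roughly like the following. This is a sketch assuming the @pulumi/aws package is installed and AWS credentials are configured, not a verbatim capture of the demo code.

```typescript
import * as aws from "@pulumi/aws";

// Desired-state description of the demo so far: one bucket, one object.
const bucket = new aws.s3.Bucket("bucket");

const obj = new aws.s3.BucketObject("obj", {
    bucket: bucket,          // place the object inside the bucket above
    content: "hello world",  // the 13 bytes seen in the `aws s3 ls` output
});

// A stack output, readable later with `pulumi stack output bucketName`.
export const bucketName = bucket.id;
```

Once deployed, `aws s3 ls $(pulumi stack output bucketName)` lists that single 13-byte object, matching the demo.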
So what we wanna do because we have software is really take advantage of some of the unique benefits that sort of a programming environment, a software engineering environment can enable for us. So one of the really simple ones is just, hey, I can write a for loop. So maybe I can say const file name of, and now I’ll use some built-in libraries. So I’ll read a directory off of disk to find all the different files that are available inside this folder. And for each one of those files, I’ll go ahead and create an object.
So in this case, it’s gonna be named file name, and the content inside that folder is gonna be F-S dot read file sync. And then path dot join dot slash files with my file name. Okay, so there we go. I’ve written some code. This is using a for loop.
It’s using some libraries that are available for me, like readdirSync and readFileSync, to interact with my environment and use the libraries inside my Node.js environment. I’m taking advantage of this being a programming language and having that flexibility and infrastructure to work with. I can also see one additional thing, which is that I get an error here. So I get a squiggle telling me, giving me the feedback right away that I have a problem with my code. And if I look at this, it’s saying that Buffer is not assignable to type string.
That’s ‘cause I have a slight issue here where I actually need to specify what content-encoding that file uses. So I’ll go ahead and fix that. Now I’ll type Pulumi up to deploy this. We’ll see that I actually have this file called index dot HTML inside files. And so it’s gonna deploy that file instead of the original object with hello world in it.
Now we see we’re gonna create this resource and delete this resource. I’ll go ahead and say yes. Now one thing I’m just gonna do here, instead of continually running Pulumi up and checking the preview and doing the updates, that’s really useful when I wanna make sure I know exactly what change I’m making to the cloud, but while I’m in this rapid development mode like I’m showing you right now, it’s really important that I be able to quickly make changes, see them and react to them as I do. So one of the things we can do to enable that is type Pulumi watch, and this is just gonna watch every time I save, we’re gonna see an update deployed to the cloud. So let’s see what happens now.
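The enumerate-and-read half of that for-loop is plain Node.js logic with no cloud dependency, so it can be sketched and exercised on its own. readFolder is an illustrative name here, not a Pulumi API.

```typescript
import * as fs from "fs";
import * as path from "path";

// Read every regular file in a folder into a map of name -> contents,
// the same shape of logic the demo's for-loop feeds into BucketObjects.
function readFolder(dir: string): Map<string, string> {
    const files = new Map<string, string>();
    for (const fileName of fs.readdirSync(dir)) {
        const fullPath = path.join(dir, fileName);
        if (fs.statSync(fullPath).isFile()) {
            // "utf-8" yields a string rather than a Buffer -- the same
            // content-encoding fix the type checker prompts in the demo.
            files.set(fileName, fs.readFileSync(fullPath, "utf-8"));
        }
    }
    return files;
}
```

In the demo, each entry of this map becomes an aws.s3.BucketObject; keeping the file-walking separate is what makes the refactoring shown later easy.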
There’s a couple of additional things I wanna do to this. I wanna modify my bucket to make it be able to host a website. In this case, I have an index document which I just uploaded called index dot HTML. So I’ll just save that. I go ahead and click save on my file, and that starts an update down here.
And that’ll take just a couple of seconds to deploy up to the cloud. But there’s one additional change I need to make, which is I need to make the ACL public read so this can be read from the internet. And I need to indicate that the content type is text HTML so it’ll be rendered correctly by the browser. Okay, so now we hit save again. That’s gonna deploy that.
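A quick aside on why that content-type change matters: S3 serves an object with whatever content type it was stored under, so without text/html the browser downloads the page instead of rendering it. A tiny lookup table shows the idea; this helper and its table are illustrative, not part of Pulumi (real programs often use a MIME library such as the mime npm package).

```typescript
// Map a file extension to a MIME type, defaulting to a binary type.
function contentTypeFor(fileName: string): string {
    const types: Record<string, string> = {
        ".html": "text/html",
        ".css": "text/css",
        ".js": "application/javascript",
        ".json": "application/json",
        ".png": "image/png",
    };
    const dot = fileName.lastIndexOf(".");
    const ext = dot >= 0 ? fileName.slice(dot).toLowerCase() : "";
    return types[ext] ?? "application/octet-stream";
}
```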
I can come over here. Oh, and there’s actually one last thing I need to do. Export const URL. Get the website endpoint. Okay, so there we go. We’ve written some code which does a little static website hosting. We’re updating that right now. In just a second, this should be available. There we go. There we go. So now we’ve got our hello Cloud Engineering Summit. Now of course, if you wanna modify this, I’ll just remove some of that excitement from this. Hit save. That’s gonna deploy. And here we go.
Now, when we hit that curl endpoint object in that bucket, it’s changed to be this one. So then we have all those developer productivity benefits of being inside an IDE. Quick feedback loops, error messages, completion lists, the ability to discover APIs, and the ability to quickly iterate on my infrastructure. One of the things I wanna do, because I have a programming language, is not have to rewrite this code every time. Somebody else has probably had to sort of invent this idea of creating this for loop and reading files from a disk and creating objects from them.
So maybe I wanna give that thing a name, turn it into a reusable piece of infrastructure instead of copy-pasting it around, actually share it in some useful way. So the first step there could be something like, I wanna say syncDir, and I wanna take a bucket. And I wanna take a dir. Okay, so now we’ve made this a function and we just need to generalize it a little bit by not hard coding this folder. And now we just need to call it.
So we say syncDir. Pass the bucket in and pass dot slash files. So in our particular case, we’re gonna call this function, but in general, we’re gonna use this thing here. So hit save on that. One of the things you’ll notice, it’s gonna start doing an update, but there’s actually not gonna be any change it needs to make, because I just did a refactoring of my code here.
This is another important thing, I can refactor my code. I can be confident those changes are not gonna make any changes to my cloud environment because I can do that preview. And now I can have it be a separate piece of functionality. There we go, I’ve abstracted out that logic, given it a name, given it an API, all that sort of thing. And I could go further.
I could move this into its own file. So for example, I actually have a file here called sync, which has a sync folder API. So I’m gonna just rename this to syncFolder. That’s now a reusable piece of infrastructure that I’ve factored out into its own file. It’s documented.
If I come back over here I can hover over this and see what the description of that API is. And so I’ve created a reusable piece of code. And this is really what programming languages and software engineering are so good at, is the ability to create abstractions and reuse them. There we go. Now I’ve got, in a very simple form, static website hosting with just creating a bucket and then syncing the folder of files to that bucket.
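Put together, the refactored helper might look roughly like this. It is a sketch rather than the exact demo code: the inline .html-only content-type check stands in for a real MIME lookup, and it assumes the @pulumi/aws package.

```typescript
import * as fs from "fs";
import * as path from "path";
import * as aws from "@pulumi/aws";

// Sync every file in a local folder into a bucket as a public object.
export function syncFolder(bucket: aws.s3.Bucket, dir: string) {
    for (const fileName of fs.readdirSync(dir)) {
        new aws.s3.BucketObject(fileName, {
            bucket: bucket,
            content: fs.readFileSync(path.join(dir, fileName), "utf-8"),
            acl: "public-read",
            // Stand-in for a proper MIME lookup (e.g. the `mime` package).
            contentType: fileName.endsWith(".html") ? "text/html" : undefined,
        });
    }
}
```

With that in place, static website hosting really is two statements: create a bucket with an indexDocument, then call syncFolder(bucket, "./files").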
Okay, so we’ve started with some very simple bottoms up examples here. Could start from just the raw building blocks of AWS. But one of the things we’ve learned is about this ability to create abstractions. And we really wanna make that something that’s available to as many users as possible. And so earlier this week, we actually launched something called the Pulumi Registry.
And the Pulumi Registry is a place where you can go see all the various things that are available for you from within Pulumi. And so your Pulumi programs have access to all of these different packages. And there’s 78 of them right now, but quickly growing over the next few weeks and months. There’s packages for all the things you expect, like AWS, Azure, GCP, and Kubernetes. We looked at AWS in the last example, but for example, I can come over into Azure.
I can see an overview of the package and how I use it, information about how to install and configure. And then most importantly, as a developer, I can go and access the API docs. So I can find things I might be interested in, like maybe I wanna know how to work with virtual machines in Azure compute. I can click on this, come into the API docs and see that there are dozens of examples that I can work with, in a whole bunch of different languages, like TypeScript, Python, Go, and C#, that I can use as a starting point to start working with these raw building blocks of the cloud. But there’s not just these core cloud building blocks.
There’s also two additional things. One is, there’s dozens and dozens of long-tail cloud and SaaS providers that I can work with. If I wanna work with Akamai or Alibaba or Auth0, I can work with those inside Pulumi. And I can get the documentation from the Pulumi Registry. But then, just like the syncFolder API that we looked at, we also have a bunch of components.
And these aren’t things published by cloud providers themselves. These are packages of functionality that’s built on top of what the cloud providers offer to make it easier to work with certain parts of the cloud. There’s things like EKS and API Gateway. Things like VPC and Azure Quickstart for container registry, geo-replicated. And finally there’s things like the ability to deploy some very common and useful helm charts like CoreDNS.
The one I wanna dive into is Amazon EKS. So with the EKS package, it takes all of the complexity of standing up a EKS cluster in AWS and turns it into, with smart defaults, best practices built-in, a single line of code that does all the right things by default, but then offers a bunch of configuration, that we can go look at in the API, for all the additional things you might wanna do on top of that. So if you wanna create a new IDC provider, if you wanna set the desired capacity, all of these options are available, but there’s smart defaults and best practices defaults built-in to the API. Let’s take a look at what it looks like to use a component like this Amazon EKS component from within Pulumi. So this case, we looked at TypeScript in the previous example, let’s show that we can do this with Python as well.
And so here we just took that little piece of example code that was in the registry, we brought it over into our program here. And this is just a normal Pulumi Python program. When I do Pulumi up in this context, we’ll see that something’s quite different. Even though I only wrote one line of code here to create an EKS cluster. We’ll see the Pulumi is, when I try to deploy this.
is gonna deploy quite a few resources. So it’s gonna deploy 28 resources into the cloud, and that’s the component I specified. But then we see all these different children of that component. And that includes the EKS cluster itself inside AWS. But it also includes some networking capabilities, some IM capabilities, and even some resources inside the Kubernetes cluster itself.
So these are resources, not within AWS, but actually within that Kubernetes cluster, they wanna create a config map. And so this shows how we can mix and match those cloud providers to do that. So this is really the power of abstractions, is all of this logic to build up and connect all these different building block pieces to create that best practices EKS cluster. All of those are built-in to this component, which was built once, shared in the registry, and now anyone can come and use it and automatically benefit from all this without having to just copy-paste that over and take ownership of it. Now, I wanna actually also show, in the registry there’s a link to the source code.
So we can come over here, see that source code in GitHub, but actually have that downloaded locally as well. And there’s two things I wanna highlight related to the source code for that package. The first is that the EKS package is actually written in TypeScript. And so here’s an implementation of that cluster class that we just used to create the EKS cluster, and all the outputs that are specified in the documentation for that. Now you might wonder, we used that component from Python in the last demo, but here we’re actually showing that it’s implemented in TypeScript.
And this is because Pulumi has made available the ability to create components in one language and use them from other Pulumi languages. That means that this ecosystem of components can be shared across the various different language ecosystems that are working with modern infrastructure as code. The second thing to note is that once we define a component and give it a nice interface and API and documentation of what its contracts are for the behavior of the various different interfaces that it exposes, we can then write tests that test the behavior of that interface and of that component. And so, for example, here are some of the tests that we have for that EKS component. Some basic tests that when I stand up a cluster, the kubeconfig that’s generated is what I expect.
And some more complex tests that configure the service IP range and verify that the output is the expected values from the actual cluster provisioned by AWS. These sort of tests allow us to really enforce and make sure we’ve created the right contracts for our components, and that those are robustly tested on every commit that’s merged into these components. That means that as a consumer, I can be confident that this piece of logic behaves as it’s designed. And that means that I don’t have to worry about all those details of the internals of that. I know that when I use this component, it’s tested and validated to make sure it behaves as expected.
Okay, so we’ve looked at two examples so far of storage infrastructure and of compute infrastructure, but there’s one last one I wanna touch on just very briefly, and that’s application infrastructure. So one of the other components that we have available inside the registry is the API Gateway. So API Gateway is obviously a service inside AWS that lets me build serverless REST APIs. And so with Pulumi and the AWS API Gateway component, will make it really, really easy to create one of these API Gateway components directly from within your code. And so here’s the documentation for that API Gateway.
We’ll just go ahead and take this particular example. We’ll actually come over into our original code base here, and we’ll just replace this with that code. So I’m gonna hit save to deploy this. We’ll actually see that because I removed the existing code, we’re going to delete a number of things that were provisioned already. I have some things I didn’t mean to have here.
Okay, there we go. Let me go ahead and hit save. I got that feedback, I got that error really quickly. That’s good. I didn’t have to wait for this to deploy, but now it’s deploying.
We’ll actually see this is gonna destroy some of the existing infrastructure and stand up the new infrastructure. So it’s gonna stand up a variety of things to support this REST API, as well as a number of things to support this callback function. But the interface that I specify here is really simple. I say I want a REST API. I say I want the routes to be just a single route, and for that route to be the root path, the method GET, and then when that’s called, it’s gonna invoke this function F.
And this function F is just some code I’ve written in line here that’s using a callback function, which lets me actually specify the implementation of this callback right in line. So I specify that I wanna log the call and that I wanna return a 200 that says hello. So I’ve exported this as URL again. So I can just come over here, curl that URL endpoint. And now, instead of pointing to the S3 bucket, which has now been removed, it’s pointing to this function that I’ve specified in API Gateway.
So a really simple way to build up serverless applications by using these higher-level building blocks that are available inside the Pulumi Registry to do things like deploying an API Gateway REST API. Okay, so we’ve seen several examples of how we can kind of use something like Pulumi. But really these come back to, all these examples lean on the ability to use programming language capabilities, use software engineering capabilities to get developer productivity within the IDE, to create abstractions within functions or libraries or packages or versioning, and ultimately to work with the entire breadth of the cloud, from AWS, Azure, GCP and Kubernetes, to a wide variety of components built on top of them. That’s it for me. Thank you. Back to Joe.
Well thank you, Luke. That was super exciting. I think it’s one thing to talk about it and it’s another thing to see it in action. I think that was a really great demonstration of a lot of the concepts of really going from infrastructure as code to infrastructure as Software, and giving us a way to go from being buried in YAML to programmable building blocks and reusable architectures.
So that technology is one thing, but really this idea of getting developers more in the driver’s seat, really empowering infrastructure teams to go to the next level goes well beyond just the technology. I think, you know, the infrastructure as software approach is a necessary prerequisite, but not sufficient on its own to do what we’re calling cloud engineering, which is bring the cloud closer to developers, bring great software engineering practices to infrastructure teams, break down the walls between the two sides of the house and let people really collaborate at an entire rapid new pace of productivity. And to talk a little bit more about that, I’m thrilled today to have Ken Exner, the GM of Developer Tools from Amazon Web Services, here to chat with me a little bit about the role of developers in the Modern Cloud era. Thanks for being here, Ken.
Of course, good to be here. Thank you for having me. Yeah, I’d love to hear just a little bit about kind of what your role at AWS is, and then we’ll jump into some fun topics to chat through together. Sure. So I manage Developer Tools for AWS, sort of a portfolio of products that are targeting developers and making their lives easier in developing on AWS. So it’s everything from infrastructure as code tools, like CloudFormation and CDK, to the SDKs and CLIs that developers use, to services like Amplify, AppSync, the code services, Cloud9.
So a big portfolio of tools and services for helping developers be productive on AWS. Awesome. Yeah, it’s always fun to chat with a fellow developer productivity and developer tools nerd. It’s frankly what gets me up out of bed every morning. So great to have you here.
Yeah, maybe to kick it off, I think we’re seeing much more of a transformation in terms of the way that developers and infrastructure teams are working together. And I think one of the catalysts for that is sort of the Modern Cloud architectures. This move from monolithic simple applications to more distributed architectures, which is frankly really exciting for developers. Talk me a little bit through why does the Modern Cloud change the way that we should approach how we build software?
Sure. So I think a lot of the traditional architectures are based around this monolithic application architecture where you have a server and your goal as a developer is to get software onto that server.
One of the things that has happened with modern architectures is while we made it a lot easier to operate these things, we’ve also introduced a lot of complexity in the distributed nature of these things. So if you look at a typical modern architecture, you no longer have to manage these servers. We’ve created this serverless environment, we’ve created a lot of these capabilities that make it easier for developers and operators to use these pieces, either for managed services, or you’re using container environments. But there’s a lot of moving parts. So a typical application, modern application, will be sort of a dance of microservices and managed services, or maybe you’re using some serverless services from AWS, maybe you have some distributed microservices are part of your architecture, you have different data stores.
So it’s become sort of this network of all these distributed pieces that a developer has to think about. I think one of the things that I wanna see us do better as an industry is get better at making people productive in this distributed architecture. It’s a lot easier to operate. You’re able to get a lot of benefits from being able to use all these managed services. But how do you make it easy for developers to develop against that, I think, is the challenge I think Pulumi and AWS see, is sort of the next thing in productivity.
We need to make that easier. Yeah, absolutely. I still remember, frankly, this is sort of the era that I think AWS was born out of, but back in the early to mid-2000s I was working at Microsoft, and we had all these XML Web services and distributed architectures. But really the world today has come so far beyond that, where the systems are much more loosely connected, we’re shipping different pieces at different rates. Why is this, do you think, exciting for developers, and how do you see their role in creating, developing, maintaining, these more distributed architectures?
I think this is sort of the entire story of DevOps, right? It’s the developer is now part of the story in defining infrastructure. The line between infrastructure, operations and development has become much blurrier. A developer typically has to manage infrastructure as well. Think about infrastructure. Sometimes it’s as little as they may be responsible for creating their own containers. Is that infrastructure? Is that operations? It’s a little bit of both.
At the same time, you’re seeing IT and traditional operations folks have to pick up development responsibilities. So a typical cloud Center of Excellence or IT shop is starting to pick up development responsibilities as well. And sort of the line between a developer and operator has really, really become blurry, and something that most professionals who work in this space typically have to wear both hats, at least some part of their time. They have to be an operator and a developer and think like both personas.
What is the ideal interface between the, let’s say, operations and developers? Is it code? Is it point and click? Is it ticketing? Is it Kubernetes? What are you seeing in your customers? Well, I hope it’s not ticketing.
I think the space that we both play in, sort of infrastructure as code, application code, I don’t wanna get into a debate about declarative versus imperative languages, but having an artifact that is sort of the contract between infrastructure and application code, I think that’s the right answer. The infrastructure as code space, whether it gets realized as declarative or imperative languages, is sort of the way to think about your infrastructure. It’s the way to sort of reason about your infrastructure. It gives you something that can be version controlled, that can be code reviewed, that can be used to sort of describe your infrastructure at any point in time. You can recreate your infrastructure from that definition.
I think these are important improvements in how we think about and manage our applications and our infrastructure, is being able to put it into code that can then be all the benefits of code. You can reason about it like code. You can do a diff, you can do a code review, you can version control it. All these things that are super important that have made developers productive. You’ve taken that to infrastructure, given the power of code to your infrastructure management.
I think it’s a big, important movement in how we manage our infrastructure as an industry. Absolutely, I mean, once you do infrastructure as code, whether it’s declarative or imperative, you have a artifact, you can version it, you can bring a lot of the lessons learned that we know about software engineering and now apply it to infrastructure. And I think it’s very important not to try to boil the ocean. One of the benefits is a lot of these infrastructure as code tools do have ways to integrate with existing resources. I think it’s easy to generate some CloudFormation from your existing resources, or just pick specific pieces to modernize one at a time.
So that’s really, really good advice there. I think one of the things that languages and code gives us, let’s say, is abstraction, encapsulation, the ability to build bigger things out of smaller things. One of the things that always struck me and resonated with me about the AWS platform in particular is you’ve got hundreds of building block services, and you can stitch them together in infinite numbers of ways to create infinitely rich and capable cloud services out of these different building blocks. It can be daunting though, to understand the proper way to configure all these things. But with code, we can now begin to up-level the level of abstraction that we’re programming against and think about entire systems or entire architectures, not just the building blocks.
Where does this all go from here? Are you excited about that capability? I think what you’re sort of describing is sort of the history of abstractions and software. Software is an evolution of abstractions. You’re trying to build on previous abstractions. EC2 is an abstraction of a server. Lambda’s a further abstraction.
So what you’re seeing is just abstraction being built on top of abstraction on top of abstraction. And I think you’ll see that continue to happen. One of the things I’m excited for in AWS is making it easier for people to start using the 200 plus services that we have. How do you create abstractions that allow people to be more productive at a higher level than the lower-level building blocks? It’s important that we provide lower-level building blocks, but also that we start going higher and higher and making it easier for people to develop in particular use cases. Typically this is around a particular types of use cases like maybe someone wants to build a front-end or mobile application.
So we started developing abstractions like Amplify in AWS, that allows you to work at that level of abstraction rather than at the API Gateway Lambda levels of abstraction. So I think we will continue to see more and more of this, trying to push it up and make sure that people can be more productive at higher levels of abstraction. I think with Pulumi as well, and in some of the infrastructure as code, there’s an opportunity for us to do things there as well. You can define an AWS resource, or you can create higher levels of abstraction. Maybe you would create an SNS resource or an SQS resource.
But if you combine them together and create a higher-level component resource, you can do Pub/Sub. And that new abstraction provides sort of an architectural pattern for how to use these two lower-level building blocks. So I’m excited about doing more together with Pulumi and others to create these new higher-level components. Take the building blocks that we have and build higher-level application building blocks that make it easier to develop with opinionated patterns rather than having to stitch everything together at the low-level building block. So I think you guys have something called a component resource or a resource component.
I wanna take that idea and go bigger with that. How do we create sort of the ability for developers to start creating these higher-level patterns and sharing them with others? I think we can create an entire programming model based on these abstractions that we can build on top of Pulumi and CDK and other infrastructure as code frameworks. Well thanks, Ken, for sharing the perspectives there. I totally agree with everything we just discussed. I think I’m really excited to go from building blocks to these architectures and patterns.
I think that’s the next level in this cloud evolution is not reinventing the wheel, really, really putting the builders first and really enabling us to build bigger things out of smaller things. And I think that’s really exciting. I think in terms of putting this into practice in your own teams, really putting the builders first and putting them in the driver’s seat is kind of the first step. So I’ll talk through a little bit about different ways we’re seeing teams organize and empower developers. The unfortunate news is there is no one right answer, but this is definitely a truism across all of the different models that we’re seeing in practice.
I think enabling the folks who are gonna innovate to do that is key. It may sound obvious, but you look at where we’re coming from, and many developers just don’t have the ability to spin up the infrastructure they need. It’s surprising to me, but we still talk to folks all the time who, developer has to file a ticket to get some infrastructure, and then wait a long time to get that. Up to a month or longer. And that’s no fun. That’s not the recipe for moving fast. Really operating with code is the key that we’re really driving towards here. This whole infrastructure as software approach. This is not new, in a sense. We’ve been on this DevOps journey for over 10 years.
In fact, I’ve gotten all this way into the talk and I don’t think I’ve said the word DevOps once, but I wanna really tip my hat to kind of everything that’s come before. And I think DevOps was an essential movement to getting where we are today. I think the unfortunate reality is DevOps really brought more dev to the ops than it did the opposite of empowering the developers to really get their hands on cloud infrastructure. Which is great because cloud engineering’s about going both ways, bringing the cloud closer to developers, but also delivering great software engineering practices to infrastructure teams, which DevOps really played an essential role in doing. And it’s really laid the foundation for the shift to cloud engineering.
And I think this is really taking a lot of the lessons of DevOps and taking them to the next level in terms of really, software engineering, even in the context of infrastructure. Yes, we used infrastructure as code and Config Tools, and we did some amount of testing when it came to DevOps, but we didn’t really get all of the benefits that we talked about. The software engineering “Desiderata” slide from earlier, a lot of those things didn’t really apply in the realm of DevOps. And so this really is about supercharging infrastructure teams’ ability to get things done. The demands on infrastructure teams are greater than ever before.
And so this is one way to keep up with those demands. Empowering developers, from the perspective of an infrastructure engineer, is great because that means the infrastructure team isn’t always on the hook for getting everything done and getting blamed when schedules slip. Of course there’s often gotta be guardrails in place. For example, if you’re empowering a developer, that developer may not be the expert in all things security or how much things cost. And the infrastructure team, really, they have to keep control over those things and make sure that there’s not a serious security incident and so on.
So cloud engineering is about empowering the right people to do the right job at the right time. I think one pattern we often see, especially with early stage startups, or mid-stage startups, you know, folks that were fortunate enough to be born in the cloud really are adopting models that are much closer to what Amazon Web Services themselves internally evangelize. You know, if you don’t need to create a separate DevOps organization or infrastructure organization at your scale, you know, that’s almost always preferable for folks because you can just empower the builders to build. And over time you have infrastructure experts who emerge. You know, configuring a virtual private cloud in Amazon? Most developers are gonna, you know, be bored out of their minds trying to figure out how best to do that.
That’s where an expert in infrastructure and a domain expert in networking, for example, really can be valuable. But in the early days, if you’ve got a building block and it’s a virtual private cloud that’s been written in Pulumi, for example, and there’s a module and you can just go pick it up off the shelf, and you don’t have to become an expert in how that thing is configured, that’s probably preferable. Especially if your goal is to get something shipped. If you’re in a Y Combinator incubator and you’ve got demo day coming up next week, the last thing you wanna do is spend three days configuring a VPC. So use tried and true best practices, get up and running.
And you know, this whole two pizza thing is, size of the team should be no bigger than can consume two pizzas. You know, it’s kind of rule of thumb for service teams. And the nice thing about that is it keeps the boundaries between the teams pretty reasonable, and you can get your arms around it. You can define it with an API, so that you’re, again, not dealing with ticketing, you’re reducing the amount that humans have to be in the loop and really using software as the interface between the teams. So this is a very popular model, even in larger companies.
You know, obviously Amazon is quite a large company doing this at an incredible scale. I will mention there’s often site reliability engineering that emerges as a practice within these teams as well. So you often have an SRE expert who’s really coaching the team on how to run a highly reliable, you know, scalable service. A common pattern we see also in larger organizations is this concept of a platform team. The platform team often is trying to empower the organization around them.
That includes the infrastructure and operations team. It also includes the developers. These are the folks that are usually setting up a Kubernetes cluster, usually setting the standard for how infrastructure is done in the organization. And, you know, this can be exciting for two reasons. One, the platform team can really focus on shipping the best platform. And the platform itself is a product for that team. Their customer is the developer within their organization or the operator within the organization. So it’s exciting for the infrastructure team because they can focus on building this amazing platform and really specialize in that. Historically this would have been PAS, or what have you. Kubernetes thankfully has given us a standard foundation to build on.
And so whether it’s a PAS or just a opinionated assortment of services, often using the building blocks to create these reusable architectures, that that’s typically the approach that these platform teams are taking. And then we’ve got developers, and developers benefit in this world as well, because the developer can spin up infrastructure typically. And usually the interface is code, using infrastructure as software. And platform team can give them these reusable components and developers can spin them up. You know, one customer we work with actually has an opinionated Kubernetes cluster.
They call it a microservice environment. So if a developer needs a dev or test environment, they can just go spin up one, you know, and be productive and just focus on building their microservices and building their applications. And that’s another form of developer-first, right? Developer-first isn’t just about developers being in the driver’s seat and oh, we don’t need DevOps or infrastructure teams. That’s not what developer-first is about. Developer-first is about thinking of the developer, and in this model the platform team really is thinking deeply about how to empower the developer.
And I think that’s an exciting pattern as well. But there are a lot of different ways to organize, and frankly, people oscillate between these models as they grow and as they scale. And I think really the number one takeaway here is, you know, one plus one equals three. We’re seeing that thanks to this new approach to thinking about the cloud as an operating system, thinking about infrastructure as something we use software to deal with, that we’re breaking down the barriers between infrastructure teams and engineers and developers, and really enabling teams to build better things together. And that’s the best possible outcome that we can see.
And so in summary, just to reiterate kind of some of the things we’ve chatted about today, so developer-first infrastructure really is about empowering builders to build, first and foremost. And empowering developers and infrastructure experts to build great cloud software. Infrastructure as software, not code, is the gateway into that world. You know, it’s what gives you programmable building blocks that can be assembled in infinite ways to build infinite new capabilities into your applications. Really, that foundation allows us to move beyond just the building blocks to reusable architectures.
And I think that’s the way we stop reinventing the wheel. Every time I sit down to spin up a network in Amazon Web Services, I don’t wanna have to go read that 15 page white paper, I just wanna use an architecture off the shelf that’s written in software that I can compose, just like I do any of my application components as well. And we’re finally there, that we can do that with infrastructure as software. And finally, cloud engineering is the practice. I wish it were as easy as just sprinkling some technology magic pixie dust and everything just works, but it turns out actually how we work together as a team is the most important thing, and often the most difficult to get right.
So I think empowering developers and empowering the builders, we’ve covered that, but that really is the first step in terms of getting to cloud engineering. And really working together and breaking down that wall between infrastructure teams and developers, is what leads to this one plus one equals three. So thank you very much for being here today. It was great to take you through the journey of developer-first infrastructure, the cloud as a new operating system, infrastructure as software, and cloud engineering. I hope you learned a thing or two, and I hope you enjoy the Cloud Engineering Summit. Thank you.
Pulumi is open source and free to get started. Deploy your first stack today. | https://www.pulumi.com/resources/developer-first-infrastructure/ | CC-MAIN-2022-05 | refinedweb | 9,782 | 62.07 |
Hi -
the typescript file gives me the above error, the compiled javascript is fine.
So it's more of an irritation than a real problem.
My ts file:
import d3 = require('d3');
import TweenMax = require('TweenMax');
This is how the js file starts:
define(["require", "exports", 'd3', 'TweenMax', "events/event"], function (require, exports, d3, TweenMax, CustomEvent) {})
I have the definition files:
/// <reference path="../../typings/greensock/greensock.d.ts" />
/// <reference path="../../typings/d3/d3.d.ts" />
Code completion does work for TweenMax --> cmd-click takes me to the definition file.
What could cause the above error?
Thank you for your help!
Ulrich Sinn
greensock.d.ts is not an external module - that's the problem.
So you can't use import here. See, | https://intellij-support.jetbrains.com/hc/en-us/community/posts/206325699-Error-12-27-TS2307-Cannot-find-external-module-TweenMax-require-js- | CC-MAIN-2020-16 | refinedweb | 122 | 69.58 |
Integer overflow – sound familiar, isn’t it. This is one of the things every developer must have experience at least once during programming lifespan. It is a very common design flaw that acts as an silent killer and will crash the application/transaction or lead to erroneous result once overflows. So what is a actually a overflow? Simply put , it is when you are trying to put a larger or smaller value into a variable then what it can hold.
When you declare a variable of a given data type say int, the maximum and minimum value that can put into this variable is limited by its data type. The max and min value of int data type in Java is, 2^31-1 or (2147483647) and -2^31 or (-2147483648) resp.
The max and min value is defined as a constant in JDK as seem by the below snippet.
public class IntLimits { public static void main(String[] args) { System.out.println("Max int value " + Integer.MAX_VALUE); System.out.println("Min int value " + Integer.MIN_VALUE); } }
Use-cases
Some of the very common use-cases where this issue exhibits is, the use of int datatype to store the value of sequence retrieved from the database to be used as unique id for a given transaction using JDBC. Generally the sequence starts from a smaller number and every time we get the nextval from the the sequence, it increases it value by 1. So depending on the volume of the transactions, one day the sequence value will cross the max int value and this will cause an int overflow , resulting in runtime error. So whenever dealing with sequences which can run into high numbers make sure you use a long data type instead. | https://www.codercrunch.com/post/459060557/what-is-int-overflow-in-java | CC-MAIN-2019-35 | refinedweb | 291 | 61.36 |
I am writing my first Python app with PyQt4. I have a MainWindow and a Dialog class, which is a part of MainWindow class:
self.loginDialog = LoginDialog();
I use slots and signals. Here's a connection made in MainWindow:
QtCore.QObject.connect(self.loginDialog, QtCore.SIGNAL("aa(str)"), self.login)
And I try to emit signal inside the Dialog class (I'm sure it is emitted):
self.emit(QtCore.SIGNAL("aa"), "jacek")
Unfortunately, slot is not invoked. I tried with no arguments as well, different styles of emitting signal. No errors, no warnings in the code. What might be the problem?
You don't use the same signal when emitting and connecting.
QtCore.SIGNAL("aa(str)") is not the same as
QtCore.SIGNAL("aa").
And if you pass any parameters in emit, your login method must accept those parameters. Check if this helps :-)
There are some concepts to be clarified
[QT signal & slot] VS [Python signal & slot]
All the predefined signals & slots provided by PyQt are implemented by Qt's C++ code. Whenever you want a customized signal & slot in Python, it is a Python signal & slot. Hence there are four cases for emitting a signal to a slot:
The code below shows how to connect for these four different scenarios:
import sys
from PyQt4.QtCore import *
from PyQt4.QtGui import *

class Foo(QObject):
    def __init__(self, parent=None):
        super(Foo, self).__init__(parent)
        dial = QDial()
        self.spinbox = QSpinBox()

        # --------------------------------------
        # Qt signal & Qt slot
        # --------------------------------------
        # option 1: more efficient
        self.connect(self.spinbox, SIGNAL("valueChanged(int)"),
                     dial, SLOT("setValue(int)"))
        # option 2:
        self.connect(self.spinbox, SIGNAL("valueChanged(int)"),
                     dial.setValue)

        # --------------------------------------
        # Qt signal & Python slot
        # --------------------------------------
        self.connect(self.spinbox, SIGNAL("valueChanged(int)"),
                     self.myValChanged)

        # --------------------------------------
        # Python signal & Qt slot
        # --------------------------------------
        # connect option 1: more efficient
        self.connect(self, SIGNAL("mysignal"), dial, SLOT("setValue(int)"))
        # connect option 2:
        self.connect(self, SIGNAL("mysignal"), dial.setValue)
        # emit
        param = 100
        self.emit(SIGNAL("mysignal"), param)

        # --------------------------------------
        # Python signal & Python slot
        # --------------------------------------
        # connect
        self.connect(self, SIGNAL("mysignal"), self.myValChanged)
        # emit
        param = 100
        self.emit(SIGNAL("mysignal"), param)

    def myValChanged(self):
        print "New spin val entered {0}".format(self.spinbox.value())
Conclusion is --
The signal signature for a Python signal differs from that of a Qt signal in that it doesn't have the parentheses, and it can be passed any Python data types when you emit it. The Python signal is created when you emit it.
For slots, there are three forms of signatures.
Well, all the description above is based on the old-style PyQt signal & slot. As @Idan K suggested, there is an alternative new style for doing these things, especially for the Python signal. Refer to here for more.
Proposed features/Mining
- Status
- Work in progress - Voting
- Proposed-by
- Dennis de
- Proposal-date
- 2007-09-13
- RFC-date
- 2007-11-14
- Voting-Start
- 2007-12-05
- Voting-End (regular)
- 2007-12-19
Proposal
In my region there are some very large surface mines (up to 4000 hectares of area) which I would like to tag. See wikipedia:de:Tagebau and wikipedia:surface mining for details on such mines.
I propose landuse=mining as the tag for mining in general. --Dennis de 15:36, 13 September 2007 (BST)
Current Tags
There currently is landuse=quarry, which is specific to one particular kind of surface mine, for stones. I think this is not adequate to use for other types of mines. A quarry could even be defined as one type of mine, as described below.
See also: Proposed features/surface_mine (abandoned)
Proposed Realisation
Applies to
- areas - preferable
- nodes - for really small mines or ones with unknown extents but known characteristic points (like the entrance of an underground mine)
- ways - if someone knows the course of a mine shaft, a way would be suitable for this.
Additional Tags to be used with mining
- surface=(yes|no). This could/should have influence on the renderer as underground mines will conflict with other areas or ways
- resource= - whatever they mine in that mine, e.g. lignite, stones (which could supersede quarry). This could be rendered in some way (different colors?), but I think it's mainly metadata for the time being.
- name= - If the mine has a name. Would be nice to have it rendered like the names of lakes are (in the middle of the area or at the point)
- name:languagecode= - could be used if it has different names in different languages (e.g.:
name:de=Tagebau Hambach)
Discussions
On the _edited_ Proposal
- I edited the first proposal to reflect the discussion and my own new ideas and will send a RCF for this. --Dennis de 09:09, 14 November 2007 (UTC)
- This looks good to me. Robert Wyatt
- Looks good to me, there are areas in the world with huge areas that can be marked out in this way and, as Dennis de says, it is more general than landuse=quarry. MikeCollinson 18:44, 14 November 2007 (UTC)
- Great, looks fine, plenty of uses for this around me. i'll vote for this Myfanwy 21:19, 4 December 2007 (UTC)
- About rendering, how about something in the greys for an area ? --PhilippeP 08:58, 6 December 2007 (UTC)
- A light grey would be similar to landuse=residential (and other landuses in lower zooms), so it should be at least a bit different from the light tone. I thought about a light brown, something like , which looks like a mixture of grey and brown. Mixing it with even more grey would look that way --Dennis de 10:08, 6 December 2007 (UTC)
- A specific tone of grey is not important for me, to even more distingish from residential area, there could be a darker border (residential aera has no-or same color border I think).--PhilippeP 14:05, 10 December 2007 (UTC)
- On the principle it's ok, if you restrict to the whole area of surface mining without any detail. I won't tag a point or a line as mine. A point is a shaft, a line a gallery, tunnel, conveyor belt,outcrop,etc. I would leave the underground operation aside (difficult to map, obviously no GPS) and maybe add some tags for tailings, pit. How about the difference between mine in activity and abandonned ? --Gummibärli 20:08, 7 December 2007 (UTC)
- First of all your right: I mixed the shaft with tunnel etc.. But landuse=mining is correct for them too, they are used for mining regardless of their actual form.
- Why leave the underground aside? If some has good data for it, then it would be great to have it. I don't has to be rendered though (would be confusing). Of course it's not doable with GPS, but in case someone opens a free source of underground-mine-data the feature is ready for this. Only because it's currently not doable does not mean it can't be considerated IMHO.
- The activity is a good addition I would definately have included in this proposal if I'd have thought about it. As the voting has started I don't want to change the proposal till it's over, because the votes are for or against the current form.
For the realisation: There was a proposal for Disused Railways, which is deserted now, but had the idea to get a more general tag
use_statusfor highways and railsways. IMHO such a general tag would be better than a single solution for mining. --Dennis de 09:20, 8 December 2007 (UTC)
- Maybe I don't get it. How would you use the tag for subsurface operation ? It's nice 3D. Not clear to me, but I dont what to block your proposition, just changed my vote to approve --Gummibaerli 22:46, 8 December 2007 (UTC)
- Of course it only makes sense if there's data and if it's in a 2D-representation - at least as long as elevation is not handled. --Dennis de 13:22, 10 December 2007 (UTC)
- i think the idea for shaft mines was to mark the areas which are visible above the ground. specifically the winding gear, and offices/lunch rooms, etc. Myfanwy 20:16, 12 December 2007 (UTC)
- Useful for the brickworks in Bedford. I presume that they can contain lakes? Ojw 12:19, 8 December 2007 (UTC)
- Additional tags for mining: Proposed_features#Proposed_Features_-_Industrial --Gummibaerli 10:48, 10 December 2007 (UTC)
- IMO, the surface and resource tags should be namespaced, i.e. mine:surface=yes/no and mine:resource=*. Especially as surface conflicts with the highway surface usage. --Hawke 09:57, 15 December 2007 (UTC)
Surface=yes conflicts with existing use of the surface tag, which describes the ground (e.g. sand, mud, concrete, paved, tarmac, cobblestone, gravel) Ojw 11:54, 15 December 2007 (UTC)
On The _first_ Proposal
I think there should be an additional tag to identify them as surface-mines (perhaps under-surface-mines will come into OSM sometime too). Would it be a good idea to consider all landuse=mining with a layer < 0 as under-surface and layer=0 for surface-mining? --Dennis de 15:36, 13 September 2007 (BST)
What is the difference between a surface mine and a quarry which already has a defined feature? User:chillly 16:25 20 October 2007 (BST)
- According to wikipedia:Quarry it's a specific surface mine for stones. Our surface mines extract lignite (brown coal). So a quarry is a surface mine, but not every surface mine is a quarry. --Dennis de 16:56, 24 October 2007 (BST)
Also how are you planning on differentiating this from deep mining? ShakespeareFan00 11:37, 23 October 2007 (BST)
- I don't know what you exactly mean. :( In which way differentiating? --Dennis de 16:56, 24 October 2007 (BST)
- Shakepeare means the difference between a mine with a shaft which goes underground and is tunnelled, or one which is a big pit. a big pit mine is usually called an opencast mine. it is a valid question, one i was intending to ask. they appear very different on the surface, despite being used for similar substances. Myfanwy 18:14, 14 November 2007 (UTC)
- I'm sorry, I missed your comment. In the updated proposal there's an additional surface-tag, which can be used to differentiate them. Thinking about it, the best way to model shafts are of course ways. So would it be OK to have this tag apply to ways in combination with ways to fit your needs? Other ideas are welcome. --Dennis de 09:51, 29 November 2007 (UTC)
Voting
- As the proposer naturally I would approve this proposal :) --Dennis de 15:58, 5 December 2007 (UTC)
- I approve this proposal -- MikeCollinson 16:21, 5 December 2007 (UTC)
- I approve this proposal -- Ulfl 18:14, 5 December 2007 (UTC)
- I approve this proposal if landuse=quarry is deprecated --PhilippeP 18:38, 5 December 2007 (UTC)
- I disapprove this proposal while it includes shaft mines. Shaft mines should be a separate proposal as they are not landuse, only a (relatively) small point; also it clashes with quarry, and should not be a landuse tag --Myfanwy 18:56, 17 December 2007 (UTC)
- I approve this proposal -- Nils 19:21, 5 December 2007 (UTC)
- I approve this proposal --EdoM (lets talk about it) 08:24, 6 December 2007 (UTC)
- I approve this proposal, but basically disapprove the tag for subsurface operation (see discussion) --Gummibärli 23:08, 8 December 2007 (UTC)
- I approve this proposal. --Franc 04:35, 10 December 2007 (UTC)
- I approve this proposal, but disapprove this usage of surface and resource tags. --Hawke 09:58, 15 December 2007 (UTC)
this proposal has closed, it has been rejected
After upgrading to OSX Mojave, double-clicking on a script file in Unity prompted an upgrade of Visual Studio, which appeared to work OK, but now I get no IntelliSense and all my namespace includes are coloured red to show an error. Any idea why the namespaces aren't referenced properly, or how to fix it? (I've attached a screenshot to illustrate what I mean in the IDE.) [1]: /storage/temp/144808-screenshot.png
This completely halts any work for me in Unity, so I'd greatly appreciate any help.
Feature #6588
Set#intersect?
Description
There is Set#superset?, Set#subset? with their proper variants, but there is no Set#intersect? nor Set#disjoint?
To check if two sets intersect, one can do
!set.intersection(other).empty?
This cycles through all elements, though. To be efficient, one can instead do the iteration manually:
other.any? { |x| set.include?(x) }
I think it would be natural to add Set#intersect? and its reverse Set#disjoint?:
class Set
def intersect?(enum)
enum.any? { |x| include?(x) }
end
end
Maybe it would be worth it to optimize it if enum is a larger Set by starting it with
return any? { |x| enum.include?(x) } if enum.is_a?(Set) && enum.size > size
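Usage of the proposed methods would look like this (runnable as-is on Ruby 2.1 and later, where both methods were eventually added):

```ruby
require 'set'

a = Set[1, 2, 3]
b = Set[3, 4]
c = Set[4, 5]

p a.intersect?(b)  # => true, they share the element 3
p a.disjoint?(c)   # => true, no element in common
p a.disjoint?(b)   # => false
```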
Associated revisions
History
#1
Updated by Yusuke Endoh almost 3 years ago
- Status changed from Open to Assigned
#2
Updated by Yutaka HARA over 2 years ago
- Target version changed from 2.0.0 to next minor
#3
Updated by Marc-Andre Lafortune over 2 years ago
Comments about these simple features would be appreciated.
#4
Updated by Alexey Muranov over 2 years ago
+1. Maybe #meet? instead of #intersect? ? It can be argued that any set intersects any other, just the intersection is sometimes empty :).
#5
Updated by Marc-Andre Lafortune over 2 years ago
alexeymuranov (Alexey Muranov) wrote:
+1.
Thanks for the +1
It can be argued that any set intersects any other, just the intersection is sometimes empty :).
No, I believe it would be wrong to argue that. From wikipedia: "We say that A intersects B if A intersects B at some element"
Moreover: "We say that A and B are disjoint if A does not intersect B. In plain language, they have no elements in common"
I believe that both intersect? and disjoint? are the established terms for the concept I'm proposing.
#6
Updated by Akinori MUSHA almost 2 years ago
OK, accepted. I'll work on it.
#7
Updated by Akinori MUSHA almost 2 years ago
- Status changed from Assigned to Closed
- % Done changed from 0 to 100
This issue was solved with changeset r42253.
Marc-Andre, thank you for reporting this issue.
Your contribution to Ruby is greatly appreciated.
May Ruby be with you.
Add Set#intersect? and #disjoint?.
#8
Updated by Akinori MUSHA almost 2 years ago
I followed superset?() and the like, and made the new methods accept only a set for the moment, because I couldn't come up with an idea of how to deal with Range. For example:
- whether Set[2].intersect?(1.5..2.5) should return true
- whether Set[2].intersect?(3..(1.0/0)) should immediately return false using some knowledge of Range
Instead of going via brute-force to check every triple for the triangle property (which will be O(n^3)), we first sort the array in ascending order.
We take two numbers of the triple from left to right and then iterate over the remaining numbers from left to right, counting each valid third number. We stop the iteration as soon as we encounter a number that is >= the sum of the first two taken.
import java.util.Arrays;

public class Solution {
    public int triangleNumber(int[] nums) {
        int count = 0;
        Arrays.sort(nums);
        for (int i = 0; i <= nums.length - 3; i++) {
            for (int j = i + 1; j <= nums.length - 2; j++) {
                for (int k = j + 1; k <= nums.length - 1 && nums[i] + nums[j] > nums[k]; k++) {
                    count++;
                }
            }
        }
        return count;
    }
}
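A quick standalone check of the same counting idea on the array [2, 2, 3, 4] (a hypothetical driver, not part of the original post):

```java
import java.util.Arrays;

public class TriangleDemo {
    public static void main(String[] args) {
        int[] nums = {2, 2, 3, 4};
        Arrays.sort(nums);
        int count = 0;
        for (int i = 0; i <= nums.length - 3; i++)
            for (int j = i + 1; j <= nums.length - 2; j++)
                // early stop: once nums[k] >= nums[i] + nums[j],
                // no later k can form a triangle with i and j
                for (int k = j + 1; k <= nums.length - 1
                        && nums[i] + nums[j] > nums[k]; k++)
                    count++;
        System.out.println(count); // prints 3: (2,2,3), (2,3,4) and (2,3,4)
    }
}
```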
This Tutorial Explains how to Implement Dijkstra's Algorithm in Java to find the Shortest Routes in a Graph or a Tree with the help of Examples:
In our earlier tutorial on Graphs in Java, we saw that graphs are used to find the shortest path between the nodes apart from other applications.
To find the shortest path between two nodes of a graph, we mostly employ an algorithm known as “Dijkstra’s Algorithm”. This algorithm remains the widely used algorithm to find the shortest routes in a graph or a tree.
Dijkstra’s Algorithm In Java
Given a weighted graph and a starting (source) vertex in the graph, Dijkstra’s algorithm is used to find the shortest distance from the source node to all the other nodes in the graph.
As a result of running Dijkstra's algorithm on a graph, we obtain the shortest path tree (SPT) with the source vertex as root.
In Dijkstra’s algorithm, we maintain two sets or lists. One contains the vertices that are a part of the shortest-path tree (SPT) and the other contains vertices that are being evaluated to be included in SPT. Hence for every iteration, we find a vertex from the second list that has the shortest path.
The pseudocode for Dijkstra's shortest path algorithm is given below.
Pseudocode
Given below is the pseudocode for this algorithm.
procedure dijkstra(G, S)            // G -> graph; S -> starting vertex
begin
    for each vertex V in G          // initialization; initial path set to infinite
        path[V] <- infinite
        previous[V] <- NULL
        if V != S, add V to Priority Queue PQueue
    path[S] <- 0

    while PQueue IS NOT EMPTY
        U <- Extract MIN from PQueue
        for each unvisited adjacent_node V of U
            tempDistance <- path[U] + edge_weight(U, V)
            if tempDistance < path[V]
                path[V] <- tempDistance
                previous[V] <- U
    return path[], previous[]
end
Let’s now take a sample graph and illustrate the Dijkstra’s shortest path algorithm.
Initially, the distance of every vertex in the SPT (Shortest Path Tree) set is set to infinity.
Let’s start with vertex 0. So to begin with we put the vertex 0 in sptSet.
sptSet = {0, INF, INF, INF, INF, INF}.
Next with vertex 0 in sptSet, we will explore its neighbors. Vertices 1 and 2 are two adjacent nodes of 0 with distance 2 and 1 respectively.
In the above figure, we have also updated each adjacent vertex (1 and 2) with their respective distance from source vertex 0. Now we see that vertex 2 has a minimum distance. So next we add vertex 2 to the sptSet. Also, we explore the neighbors of vertex 2.
Now we look for the vertex with the minimum distance among those that are not yet in sptSet. We pick vertex 1 with distance 2.
As we see in the above figure, of all the adjacent nodes of 2, nodes 0 and 1 are already in sptSet, so we ignore them. Of the adjacent nodes 5 and 3, 5 has the least cost. So we add it to the sptSet and explore its adjacent nodes.
In the above figure, we see that except for nodes 3 and 4, all the other nodes are in sptSet. Out of 3 and 4, node 3 has the least cost. So we put it in sptSet.
As shown above, now we have only one vertex left i.e. 4 and its distance from the root node is 16. Finally, we put it in sptSet to get the final sptSet = {0, 2, 1, 5, 3, 4} that gives us the distance of each vertex from the source node 0.
Implementation Of Dijkstra’s Algorithm In Java
Implementation of Dijkstra’s shortest path algorithm in Java can be achieved using two ways. We can either use priority queues and adjacency list or we can use adjacency matrix and arrays.
In this section, we will see both the implementations.
Using A Priority Queue
In this implementation, we use the priority queue to store the vertices with the shortest distance. The graph is defined using the adjacency list. A sample program is shown below.
import java.util.*;

class Graph_pq {
    int dist[];
    Set<Integer> visited;
    PriorityQueue<Node> pqueue;
    int V; // Number of vertices
    List<List<Node>> adj_list;

    // class constructor
    public Graph_pq(int V) {
        this.V = V;
        dist = new int[V];
        visited = new HashSet<Integer>();
        pqueue = new PriorityQueue<Node>(V, new Node());
    }

    // Dijkstra's Algorithm implementation
    public void algo_dijkstra(List<List<Node>> adj_list, int src_vertex) {
        this.adj_list = adj_list;

        for (int i = 0; i < V; i++)
            dist[i] = Integer.MAX_VALUE;

        // first add source vertex to PriorityQueue
        pqueue.add(new Node(src_vertex, 0));

        // Distance to the source from itself is 0
        dist[src_vertex] = 0;

        while (visited.size() != V) {
            // u is removed from PriorityQueue and has min distance
            int u = pqueue.remove().node;
            // add node to finalized list (visited)
            visited.add(u);
            graph_adjacentNodes(u);
        }
    }

    // this method processes all neighbours of the just-visited node
    private void graph_adjacentNodes(int u) {
        int edgeDistance = -1;
        int newDistance = -1;

        // process all neighbouring nodes of u
        for (int i = 0; i < adj_list.get(u).size(); i++) {
            Node v = adj_list.get(u).get(i);

            // proceed only if current node is not in 'visited'
            if (!visited.contains(v.node)) {
                edgeDistance = v.cost;
                newDistance = dist[u] + edgeDistance;

                // compare distances
                if (newDistance < dist[v.node])
                    dist[v.node] = newDistance;

                // Add the current vertex to the PriorityQueue
                pqueue.add(new Node(v.node, dist[v.node]));
            }
        }
    }
}

class Main {
    public static void main(String arg[]) {
        int V = 6;
        int source = 0;

        // adjacency list representation of the graph
        List<List<Node>> adj_list = new ArrayList<List<Node>>();

        // Initialize adjacency list for every node in the graph
        for (int i = 0; i < V; i++) {
            List<Node> item = new ArrayList<Node>();
            adj_list.add(item);
        }

        // Input graph edges
        adj_list.get(0).add(new Node(1, 5));
        adj_list.get(0).add(new Node(2, 3));
        adj_list.get(0).add(new Node(3, 2));
        adj_list.get(0).add(new Node(4, 3));
        adj_list.get(0).add(new Node(5, 3));
        adj_list.get(2).add(new Node(1, 2));
        adj_list.get(2).add(new Node(3, 3));

        // call Dijkstra's algo method
        Graph_pq dpq = new Graph_pq(V);
        dpq.algo_dijkstra(adj_list, source);

        // Print the shortest path from the source node to all the nodes
        System.out.println("The shortest path from source node to other nodes:");
        System.out.println("Source\t\t" + "Node#\t\t" + "Distance");
        for (int i = 0; i < dpq.dist.length; i++)
            System.out.println(source + " \t\t " + i + " \t\t " + dpq.dist[i]);
    }
}

// Node class
class Node implements Comparator<Node> {
    public int node;
    public int cost;

    public Node() { } // empty constructor

    public Node(int node, int cost) {
        this.node = node;
        this.cost = cost;
    }

    @Override
    public int compare(Node node1, Node node2) {
        if (node1.cost < node2.cost)
            return -1;
        if (node1.cost > node2.cost)
            return 1;
        return 0;
    }
}
Output:
Using Adjacency Matrix
In this approach, we use the adjacency matrix to represent the graph. For the spt set, we use arrays.
The following program shows this implementation.
import java.util.*;
import java.lang.*;
import java.io.*;

class Graph_Shortest_Path {
    static final int num_Vertices = 6; // max number of vertices in graph

    // find a vertex with minimum distance
    int minDistance(int path_array[], Boolean sptSet[]) {
        // Initialize min value
        int min = Integer.MAX_VALUE, min_index = -1;
        for (int v = 0; v < num_Vertices; v++)
            if (sptSet[v] == false && path_array[v] <= min) {
                min = path_array[v];
                min_index = v;
            }
        return min_index;
    }

    // print the array of distances (path_array)
    void printMinpath(int path_array[]) {
        System.out.println("Vertex# \t Minimum Distance from Source");
        for (int i = 0; i < num_Vertices; i++)
            System.out.println(i + " \t\t\t " + path_array[i]);
    }

    // Implementation of Dijkstra's algorithm for graph (adjacency matrix)
    void algo_dijkstra(int graph[][], int src_node) {
        // The output array: path_array[i] will hold the
        // shortest distance from src to i
        int path_array[] = new int[num_Vertices];

        // spt (shortest path set) contains vertices that have a shortest path
        Boolean sptSet[] = new Boolean[num_Vertices];

        // Initially all the distances are INFINITE and sptSet[] is set to false
        for (int i = 0; i < num_Vertices; i++) {
            path_array[i] = Integer.MAX_VALUE;
            sptSet[i] = false;
        }

        // Path between vertex and itself is always 0
        path_array[src_node] = 0;

        // now find shortest path for all vertices
        for (int count = 0; count < num_Vertices - 1; count++) {
            // call minDistance method to find the vertex with min distance
            int u = minDistance(path_array, sptSet);

            // the current vertex u is processed
            sptSet[u] = true;

            // process adjacent nodes of the current vertex
            for (int v = 0; v < num_Vertices; v++)
                // if vertex v is not in sptSet then update it
                if (!sptSet[v] && graph[u][v] != 0
                        && path_array[u] != Integer.MAX_VALUE
                        && path_array[u] + graph[u][v] < path_array[v])
                    path_array[v] = path_array[u] + graph[u][v];
        }

        // print the path array
        printMinpath(path_array);
    }
}

class Main {
    public static void main(String[] args) {
        // example graph is given below
        int graph[][] = new int[][] { { 0, 2, 1, 0, 0, 0 },
                                      { 2, 0, 7, 0, 8, 4 },
                                      { 1, 7, 0, 7, 0, 3 },
                                      { 0, 0, 7, 0, 8, 4 },
                                      { 0, 8, 0, 8, 0, 5 },
                                      { 0, 4, 3, 4, 5, 0 } };
        Graph_Shortest_Path g = new Graph_Shortest_Path();
        g.algo_dijkstra(graph, 0);
    }
}
Output:
Frequently Asked Questions
Q #1) Does Dijkstra work for undirected graphs?
Answer: Whether the graph is directed or undirected does not matter in the case of Dijkstra's algorithm. The algorithm is concerned only with the vertices in the graph and the weights.
Q #2) What is the time complexity of Dijkstra’s algorithm?
Answer: The time complexity of Dijkstra's algorithm is O(V^2). When implemented with a min-priority queue, the time complexity of this algorithm comes down to O((V + E) log V).
Q #3) Is Dijkstra a greedy algorithm?
Answer: Yes, Dijkstra is a greedy algorithm. Similar to Prim's algorithm for finding the minimum spanning tree (MST), these algorithms start from a root vertex and always choose the most optimal vertex with the minimum path.
Q #4) Is Dijkstra DFS or BFS?
Answer: It is neither. We use this algorithm to find the shortest path from the root node to the other nodes in a graph or a tree.
We usually implement Dijkstra's algorithm using a priority queue, as we have to repeatedly find the vertex with the minimum path. We can also implement this algorithm using an adjacency matrix. We have discussed both these approaches in this tutorial.
We hope you will find this tutorial helpful.
Bangladesh 🇧🇩
Get import and export customs regulations before travelling to Bangladesh. Prohibited items include narcotics and psychotropic substances. Among the restricted items, plants and their derivative products require an import permit issued by the National Plant Quarantine Authority. Bangladesh is part of Asia, with its main city at Dhaka. It is a Least Developed country with a population of 161M people. The main currency is the Taka. The main language spoken is Bengali.
A Pattern of Misunderstanding: Fetishizing the Gang of Four
Over the last few weeks, after reading the first two articles in Phil Crow's "Perl Design Patterns" series on Perl.com, (1, 2) I've been troubled by a feeling that I couldn't quite put my finger on.
Finding myself with some unexpectedly free time a few days ago (which is to say, while the blackout kept me offline), I paged through some of my patterns books (by candlelight) and thought about what had made me uncomfortable.
What are Software Patterns?
In an attempt to refine a clearer definition of the software patterns field, I revisited a handful of books by different authors:
- Design Patterns: Elements of Reusable Object-Oriented Software (E. Gamma et al, aka "the Gang of Four")
- Analysis Patterns: Reusable Object Models (M. Fowler)
- AntiPatterns: Refactoring Software, Architectures, and Projects in Crisis (W. Brown et al)
- A System of Patterns: Pattern-Oriented Software Architecture (F. Buschmann et al)
- Smalltalk Best Practice Patterns (K. Beck)
Although there's a fair degree of variation between the styles and subjects of these books, there did seem to be a handful of areas of consensus; I identified five shared characteristics, explained below along with some quotes from the original materials:
Patterns are A Type Of Documentation: As used in the software methodology field, patterns are a form of best-practices documentation. Their intent is to communicate the observations, analysis, and recommendations of experienced developers to others in an effective way.
"Patterns and pattern languages are ways to describe best practices, good designs, and capture experience in a way that it is possible for others to reuse this experience."
The Hillside Group, hillside.net/patterns
."
J. Coplien, hillside.net/patterns/definition.html
"A pattern is a decision an expert makes over and over. For example, naming an instance variable after the role it plays is a pattern. Even though all the results (in this case, all the variable names) are different, they have a quality that is constant."
K. Beck, Smalltalk Best Practice Patterns, Preface
Patterns are Contextual Solutions:.)
"Each pattern is a three-part rule, which expresses a relation between a certain context, a problem, and a solution."
C. Alexander, The Timeless Way of Building, p. 247
"A pattern for software architecture describes a particular recurring design problem that arises in specific design contexts, and presents a well-proven generic scheme for its solution."
F. Buschmann et al, A System of Patterns, p. 8
"This is a book of design patterns that describes simple and elegant solutions to specific problems in object oriented software design. "
E. Gamma et al, Design Patterns, Preface
Patterns are Both Observations and Recommendations:.
"The pattern is...."
C. Alexander, The Timeless Way of Building, p. 247
"Design patterns capture solutions that have developed and evolved over time... in a succinct and easily applied form."
E. Gamma et al, Design Patterns, Preface
Patterns are Interconnected:.
"Patterns do not exist in isolation -- there are many interdependencies between them... A pattern system ties its constituent patterns together. It describes how the patterns are connected and how they complement each other."
F. Buschmann et al, A System of Patterns, p. 360
"There is a structure on the patterns, which describes how each pattern is itself a pattern of other smaller patterns. And there also rules, embedded in the patterns, which describe the way that they... must be arranged with respect to other patterns."
C. Alexander, The Timeless Way of Building, p. 185
Patterns are Distinctly Named: $target->add_observer( $self ) is trying to accomplish.
."
E. Gamma et al, Design Patterns, p. 3
What Software Patterns Aren't
Sadly, the idea of software patterns was popularized in a superficial way, such that many people have heard of them but lack a solid understanding of what they are. Thus, as originally pointed out by Mark Jason Dominus, what many people think are design patterns, aren't.
Inverting the consensus definition I came up with earlier, we can state that:
Patterns aren't Language Features:.
Patterns aren't Universally Applicable:.
Patterns aren't Platonic Ideals: Patterns are real-world observations from a particular development community. The patterns described by the Gang of Four aren't free-floating abstractions that somehow exist outside of time and space and were waiting for us to come and discover them; they're practical responses to various day-to-day challenges, including the limitations of C++. An attempt to document Perl Design Patterns should be looking at the best practices of the Perl development community, not just the parts that resemble a ten-year-old collection of patterns from a different language.
Patterns aren't Total Solutions:.
Patterns aren't Generic Concepts:.
Patterns Mistaken
Re-reading the Perl.com articles with these ideas in mind, I was struck by the differences between the articles and the book they critique.
Here are three examples of confusing or contradictory references to patterns, taken just from the Iterators section of the first article. (Please understand that I don't mean to pick on the author of the Perl.com articles; such confusion is widespread and this is far from the worst offense.)
"If a pattern is really valuable, then it should be part of the core language." On the contrary, software patterns typically aren't "part of" a programming language, and it's often not clear what that would even mean. For example, Template Method is a pretty valuable pattern, but how would you make it "a part of Perl" in some concrete way?
"The inclusion of iteration as a core concept represents Perl design at its finest... Perl incorporates this pattern into the core of the language.".
"To see that foreach fully implements the iterator pattern, even for user-defined modules, consider an example from CPAN: XML::DOM." To see an instance of what the Gang of Four meant by "the iterator pattern," consider another example from CPAN: DBI. Sure, there are times when you can simply call $sth->fetchall_arrayref() to put all of the matching rows into a list, but in some circumstances you want an iterator object (aka a "cursor") that lets you loop through the records one by one with while ( @row = $sth->fetchrow_array ) { ... }. That's clearly an instance of the "Use an Iterator Object" pattern rather than the "Return a List" pattern.
More broadly, the idea of describing best practices in a general, reusable way seems to have been lost somewhere along the way. For example, I would expect a presentation of the Template Method pattern to show how the master method defined in the superclass calls different internal methods depending on the subclass of an individual object. Having the subclass insert the methods into the superclass' namespace, such that you can not define different behaviors for different subclasses, is not a widely reusable approach, and is certainly not the type of functionality envisioned by the Gang of Four Template Method pattern.
Towards a Perl Iterator Pattern
So if software patterns, such as using iterator objects, do play a role in Perl, albeit a reduced one due to its dynamic nature and widespread use of the other patterns like "Return a List," what might a description of such a Perl pattern look like? the PerlDesignPatterns IteratorInterface page for a further effort in this direction.
NAME
Iterator Pattern - Use a separate entity to facilitate a loop.
PROBLEM
* A complex data structure may need to allow looping over its contents.
* There are several types of data structures that you need to loop over.
* You might need to loop over a data structure in different ways.
* You might be fetching data elements incrementally to avoid memory bloat.
SOLUTION
Create a new entity which is responsible for selecting the elements in order. Modify your loop code to create the proper iterator, then request the elements from it.
RELATED PATTERNS
If your collection is not particularly large or complex, you might prefer to "Return Aggregates as Lists".
DISCUSSION
There are many ways of implementing this pattern in Perl.
The iterator can be a closure or a full-blown class hierarchy.
The iterator's interface can be "active" or "passive":
* active or external iterator: allows the caller to step through the loop and perform its operation at each step;
* passive or internal iterator: controls the loop and allows the client to pass in an operation to be performed on the items.
EXAMPLES
For example, consider the following code, which uses a foreach loop to print out the keys and values in a hash:sub hash_pair_printer { my $hash = shift; foreach my $key (keys %$hash) { print "$key\t$hash{$key}\n"; } } # The normal use we intended to support my %hash = ( foo => 'Foozle', bar => 'Bozzle' ); hash_pair_printer( \%hash );.
External Iterator: It's easy to create an external iterator using a closure ) );In return for the added complexity of having separated the iterator into a separate entity, we get the flexibility to reuse this code in new ways:my $dbh = DBI->connect( ... ); my $sth = $dbh->prepare('select id, name from students'); pair_printer( sub { $sth->fetchrow_array } );
Internal Iterator: It's also easy to create an internal iterator );In return for the added complexity of having separated the iterator into a separate entity, we get the flexibility to reuse this code in new ways:my $headers = HTTP::Headers->new( ... ); $headers->scan( \&pair_printer );
Every Perl hash is equipped with its own built-in iterator, accessible through each %hash, but its use is somewhat prone to failure; for example, the following attempt to add an iteration count accidentally causes an infinite loop instead because calling keys also resets the iterator used by each.sub hash_pair_printer { my $hash = shift; while ( my ($key, $value) = each %$hash) { print "$key\t$value\n"; warn "Iteration ", ++ $counter, " of ", scalar keys %$hash; } }
What Next for Perl Patterns?
Although discussions of software patterns in the context of Perl have come up regularly over the last handful of years, they haven't seemd to play a significant role in the way most people approach Perl programming..
In fact, the documentation practices of the Perl community seem to share some characteristics with patterns work, with web sites like this one, and books like "Effective Perl Programming" and the "Perl Cookbook", organized as collections of contextual solutions.
I believe that there's value in documentation efforts to collect good Perl practices across a range of scales and organized along a patterns model, and I'd like to imagine that such a Perl Patterns Repository could serve as a useful community resource, operating as a kind of FAQ for information about Perl developer techniques, designs, and idioms.
However,.
Regardless of whether patterns documentation becomes trendy or not in the Perl community, I have been seeing more comments indicating curious interest or quiet enthusiasm for the topic, and I think we would be best served by remaining mindful of the original intent of the software patterns movement.
Further Reading
Perl Pattern Collections
* -- A large but somewhat jumbled collection with some nice advanced tricks.
* -- An older site; seems to be down.
* panix.com/~ziggy/perl_patterns.html -- Lists some Gang of Four patterns implemented in Perl.
More About Patterns
* perl.plover.com/yak/design -- The "Design Patterns Aren't" presentation.
* -- An article about applying patterns.
* -- A presentation on patterns in dynamic languages like Perl.
* hillside.net/patterns -- The industry association of the patterns community.
Some Perl Monks Threads
edited by ybiC: fixed "logout" PM links
* Design Patterns Considered Harmful (Dec 20, 2001)
* Are design patterns worth it? (Aug 28, 2002)
* Perl Design Patterns Book (Oct 03, 2002)
* Perl Design Patterns (Jun 14, 2003)
edited by ybiC: fixed "logout" PM links | https://www.perlmonks.org/index.pl/?node_id=285065;replies=1;displaytype=print | CC-MAIN-2020-24 | refinedweb | 1,934 | 51.89 |
[Solved] Qt4: Connect toolbar buttons to the slot of a promoted widget
Hi, I am using Qt 4.8.5 with MSVC2010 compiler and windows 7.
I've successfully promoted one of the QWidget to my custom class using designer in Qt creator.
Suppose the custom class looks like this:
@class CustomClass: public QWidget
{
...
private slots:
void mySlot();
...
}@
and suppose the QWidget that is promoted to CustomClass is named :
PromotedQWidget
Now I attempt to connect one of the toolbar button to this promoted widget.
In my mainwindow.cpp, I wroted this code:
@#include "mainwindow.h"
#include "ui_mainwindow.h"
#include "customclass.h"
MainWindow::MainWindow(QWidget *parent) :
QMainWindow(parent),
ui(new Ui::MainWindow)
{
ui->setupUi(this);
connect(ui->actionToolBarButton, SIGNAL(triggered()), ui->PromotedQWidget, SLOT(mySlot()));
}
MainWindow::~MainWindow()
{
delete ui;
}@
Although the auto-completion shows that mySlot exists behind ui->PromotedQWidget,
I found this connect doe not work. When I clicked the actionToolBarButton, nothing really happened.
I think there might be something wrong in this way of connection, but I don't know where to start.
Could someone please give me a hint? Thanks in advance.
Hi,
If the connection fails, you should have an informative message on your console.
Did you add Q_OBJECT to your CustomClass ?
Also, why make the slot private since you want to connect it from outside ?
Thank you SGalst. I got this message: Object::connect: No such slot QCustomPlot::mySlot()
Also I found it could be the problem of Q_OBJECT.
My CustomClass doesn't have Q_OBJECT macro. Actually it inherits from another CustomBaseClass which has Q_OBJECT macro and I think it should be inherited too. The weird thing is, after I add Q_OBJECT to CustomClass, the compiler shows error lnk2001 unresolved external symbol.
I'll look into it. Thanks for your response, it's very informative to me.
You need to put the macro each time you are going to use signals and slots in a QObject derived class. It will allow to create the meta object data needed by Qt to do its magic.
Is the slot implemented somewhere ?
The slot is declared in customclass.h, and implemented in customclass.cpp.
I am still fixing the problem of Q_OBJECT. I cannot put the macro in my CustomClass , which is derived from CustomBaseClass, and the CustomBaseClass is derived from QWidget. In other words, the CustomBaseClass can use Q_OBJECT but the derived CustomClass cannot
Why can't it ?
Thank you for the response, I've fixed it!
Some other slots of the base class are implemented elsewhere. I am restructuring my porgram, its too painful for debugging! | https://forum.qt.io/topic/37664/solved-qt4-connect-toolbar-buttons-to-the-slot-of-a-promoted-widget | CC-MAIN-2017-51 | refinedweb | 428 | 60.31 |
> > readX_check(dev,vaddr)> > Read a register of the device mapped to vaddr, and check errors> > if possible(This is depending on its architecture. In the case of> > ia64, we can generate a MCA from an error by simple operation to> > test the read data.)> > If any error happen on the recoverable region, set the error flag.>> I really don't think we want another readX variant. Do we then also> add readX_check_relaxed()? Can't we just pretend the MCA is asynchronous> on ia64? I'm sure we'd get better performance.Hmm.. I wonder if we could get away with not having a new readX interface by registering each PCI resource either at driver init time or in arch code with the MCA hander. Then we could just make the read routines use the variable that was just read to try to flush out the MCA (there may be better ways to do this). E.g.arch_pci_scan(){ ... for_each_pci_resource(dev, res) { check_region(res); } ...}...unsigned char readb(unsigned long addr){ unsigned char val = *(volatile unsigned char *)addr;#ifdef CONFIG_PCI_CHECK /* try to flush out the MCA by doing something with val */#endif return val;}...Then presumably the MCA error handler would see that an MCA occurred in a region registered during PCI initialization and return an error for pci_read_errors(dev);Jesse-To unsubscribe from this list: send the line "unsubscribe linux-kernel" inthe body of a message to majordomo@vger.kernel.orgMore majordomo info at read the FAQ at | https://lkml.org/lkml/2004/4/6/164 | CC-MAIN-2016-07 | refinedweb | 246 | 62.48 |
Hello, I am trying to call explicitly implemented interface method, but get such error undefined method `bar' for #<App::Cls:0x000005c> C# code: public interface IFoo { void Bar(); } public class Cls : IFoo { void IFoo.Bar() { } } Ruby code: x = App::Cls.new x.bar But this one works fine: public class Cls : IFoo { public void Bar() { } } What is wrong? - Alex
on 2009-01-08 12:25
on 2009-01-08 17:56
Filed bug #23494. Tomas
on 2009-01-08 18:18
I'm not convinced this is exactly a bug. There is no public 'bar' method on Cls after all. I think IronPython would require you to say something like App::IFoo::bar(App::Cls.new()) in order to make the interface call explicit.
on 2009-01-08 18:53
Curious, how it is expected to work (or nobody thinked it over yet?) Example to consider: - - - - public interface IFoo1 { void Bar(); } public interface IFoo2 { void Bar(); } public class Cls : IFoo1, IFoo2 { void IFoo1.Bar() {} void IFoo2.Bar() {} public void Bar() {} } - - - - C# approach: - - - - Cls obj = new Cls(); obj.Bar(); ((IFoo1)obj).Bar(); ((IFoo2)obj).Bar(); - - - - What is for IronRuby? Thanks, - Alex
on 2009-01-08 18:54
Fair enough, it could be called a feature :) There needs to be some way how to call the method. Tomas
on 2009-01-08 20:02
Not yet. obj.as(IFoo1).Bar() might be one option. Tomas
on 2009-01-08 20:12
In IronPython, we went back and forth over this. On the one hand, the API author might have chosen to make it harder to access the method, requiring a cast like this: App::Cls cls = new App::Cls(); cls.bar(); // syntax error IFoo ifoo = (IFoo)cls; ifoo.bar(); // works OTOH, dynamic languages have no type for variables, and so you don't know if it should be an error or not. ie. How would IronRuby know if the programmer expects a variable to be of type App::Cls or of type IFoo. IronPython ended up allowing it as long as there was no clash. If there is a clash, you have to use App::IFoo(cls).bar(). Thanks, Shri | http://www.ruby-forum.com/topic/175168 | CC-MAIN-2018-39 | refinedweb | 358 | 83.36 |
How To Virtualize Your Development Process Using Docker and Vagrant
Introduction
Hello guys, my name is Andrii Dvoiak and I'm a full stack web developer at a Ukrainian startup called Preply.com.
Preply is a marketplace for tutoring. It's a platform where you can easily find a personal professional tutor based on your location and needs. We have more than 15 thousand tutors across 40 different subjects.
We have been growing pretty fast for the last year in terms of customers and also our team,
so we decided to push our development process to the next level. First of all, we decided to organize and standardize our development environment so we can easily onboard new dev team members.
We spent a lot of time researching best practices and talking to other teams to understand the most efficient way to organize shareable development environments, especially when your product is decoupled into many microservices.
DISCLAIMER: Yes, we are big fans of microservices and Docker technology.
So, we ended up with two main technologies: Vagrant and Docker.
You can read about all the aspects and differences from the creators of these services here on StackOverflow.
In this guide, we will show you how to Dockerize your application for the very first time so that you can easily share and deploy it on any machine which supports Docker. Also, we will discuss how to run Docker almost anywhere using Vagrant.
Note: We think that Vagrant is overhead in this stack and you only need to use it on Windows or other platforms which don't support Docker natively. But there are a few benefits from Vagrant; we will discuss them in the second part of this article.
By the end of the day, you will have your Docker container ready to deploy on the production server and development VM in the same way.
Before We Start
We are going to deal with a simple Django app with a PostgreSQL Database and Redis as a broker for Celery Tasks.
Also, we are using Supervisor to run our Gunicorn server.
Docker Compose technology will help us to orchestrate our multi-container application.
Note that Compose 1.5.1 requires Docker 1.8.0 or later.
This will help us to run Django app, PostgreSQL, Redis Server, and Celery Worker in separated containers and link them between each other.
To make it all happen, we only need to create a few files in the root directory of our Django project (next to manage.py file):
Dockerfile - (to build an image and push it to DockerHub)
redeploy.sh - (to make redeploy in both DEV and PROD environments)
docker-compose.yml - (to orchestrate all containers)
Vagrantfile - (to provision a virtual machine for the development environment)
We are using environment variable RUN_ENV to specify current environment. You type
You type
export RUN_ENV=PROD on the production server and
export RUN_ENV=DEV on your local machine or Vagrant VM.
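The RUN_ENV variable can then be read inside the app. Here is a minimal Python sketch of how settings code might branch on it; the current_env and is_production helpers are our own illustration, not part of the article's project:

```python
import os

def current_env(default="DEV"):
    """Return the current run environment, normalized to upper case.

    RUN_ENV is the variable exported above ("DEV" locally, "PROD" on
    the server); the default value here is our own assumption.
    """
    return os.environ.get("RUN_ENV", default).upper()

def is_production():
    # Treat anything other than PROD as a non-production environment
    return current_env() == "PROD"
```

In settings.py you could then write something like DEBUG = not is_production().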
Docker
Introduction
As you may know, there are two main things in Docker: images and containers.
We are going to create an image based on Ubuntu with Python, PIP, and the other tools needed to run the Django app installed. Also, this base image will contain all your requirements pre-installed. We will push this image to a public repository on DockerHub. Note that this image doesn't contain any of your project files!
We all agreed to keep our image on DockerHub always up-to-date. From that image, we are going to run a container exposing specific ports and mounting your local directory with the project to some folder inside the container. This means that all your project files will be accessible within the container. No need to copy files!
Note: this is very useful for the development process because you can change your files and they will instantly change in running Docker container, but it is unacceptable in the production environment.
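To illustrate the difference, a production image would typically bake the code in with COPY instead of relying on a mounted volume. Here is a hypothetical sketch; the base image name and paths are assumptions carried over from this article, not a definitive setup:

```dockerfile
# Hypothetical production Dockerfile: project files are copied into
# the image at build time instead of being mounted at run time.
FROM username/image:latest

COPY . /project
WORKDIR /project

EXPOSE 80
CMD ["python", "manage.py", "supervisor"]
```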
Once we run a container, we will start supervisor with our Gunicorn server.
Once we want to do redeploy, we will pull new data from GitHub, stop and remove the existing container and run a brand-new container! Magic!
Let's start!
Docker and Docker-Compose Installation
Before we start, we need to install Docker and Docker-Compose on our local computer or server.
Here is the code to do it on Ubuntu 14.04:
sudo -i
echo 'debconf debconf/frontend select Noninteractive' | debconf-set-selections
curl -sSL | sh
curl -L-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose
usermod -aG docker ubuntu
sudo reboot
where ubuntu is your current user
If your current development environment is not Ubuntu 14.04, you'd do better to use Vagrant
to create this environment. Start from the Vagrant section of this tutorial.
Docker Image
First off, to prepare the project for deployment with Docker, we need to build an image with only Python, PIP, and some pip requirements needed to run Django.
Let's create a new file called Dockerfile in the root directory of the project.
It will look like this:
FROM ubuntu:14.04
MAINTAINER Andrii Dvoiak

RUN echo 'debconf debconf/frontend select Noninteractive' | debconf-set-selections
RUN apt-get update
RUN apt-get install -y python-pip python-dev python-lxml libxml2-dev libxslt1-dev libxslt-dev libpq-dev zlib1g-dev && apt-get build-dep -y python-lxml && apt-get clean

# Specify your own RUN commands here (e.g. RUN apt-get install -y nano)

ADD requirements.txt requirements.txt
RUN pip install -r requirements.txt

WORKDIR /project
EXPOSE 80
You can specify your own RUN commands, for example, to install other needed tools.
Then you need to build an image from this, using the command:
docker build -t username/image .
(where 'username' is your DockerHub username and 'image' is the name of your new image for this project)
Once you've successfully built the image, you should push it to DockerHub:
docker push username/image
And, don't worry, this image doesn't have any information about your project (besides the requirements.txt file).
Note: if you use a private DockerHub repository, be sure to execute docker login before pushing and pulling images.
Orchestrating Containers
In our current stack, we have to start and run at least four containers with the Redis server, PostgreSQL, Celery, and Django in a specific order, and the best way to do that is to use an orchestration tool.
To make this process simple we are going to use Docker Compose technology which allows us to create a simple YML file with instructions of what containers to run and how to link them to each other.
Let's create this magic file and call it by the default name, docker-compose.yml:
django:
  image: username/image:latest
  command: python manage.py supervisor
  environment:
    RUN_ENV: "$RUN_ENV"
  ports:
    - "80:8001"
  volumes:
    - .:/project
  links:
    - redis
    - postgres
  restart: unless-stopped

celery_worker:
  image: username/image:latest
  command: python manage.py celery worker -l info
  links:
    - postgres
    - redis
  restart: unless-stopped

postgres:
  image: postgres:9.1
  volumes:
    - local_postgres:/var/lib/postgresql/data
  ports:
    - "5432:5432"
  environment:
    POSTGRES_PASSWORD: "$POSTGRES_PASSWORD"
    POSTGRES_USER: "$POSTGRES_USER"

redis:
  image: redis:latest
  command: redis-server --appendonly yes
As you can see, we are going to run four services called django, celery_worker, postgres, and redis. These names are important for us.
First, it will pull the Redis image from DockerHub and run a container from it.
Second, it will pull the Postgres image and run a container with data mounted from the local_postgres volume. (We will discuss how to create persistent volumes in the next section of this article.)
Then, it will start the container with our Django app, forward port 8001 inside the container to port 80 outside, mount your current directory straight to the /project folder inside the container, link it with the Redis and Postgres containers, and run supervisor. And, last but not least, is our container with the celery worker, which is also linked with Postgres and Redis.
You can forward any number of ports if needed; just add new lines to the ports section.
Also, you can link any number of containers, for example, a container with your database.
Use the :latest tag so Docker automatically checks for an updated image on DockerHub.
Important!
If you named the service with the Redis server redis in the YML file, you need to specify redis instead of localhost in your settings.py to allow your Django app to connect to Redis:

REDIS_HOST = "redis"
BROKER_URL = "redis"
The same goes for the Postgres database: use postgres instead of localhost.
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'database_name',
        'USER': os.getenv('DATABASE_USER', ''),
        'PASSWORD': os.getenv('DATABASE_PASSWORD', ''),
        'HOST': 'postgres',
        'PORT': '5432',
    }
}
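Note that the getenv calls above fall back to empty strings, so a missing DATABASE_USER or DATABASE_PASSWORD only surfaces later as a confusing connection error. Here is a small helper that fails fast instead; this is our own illustrative addition, not part of the article's project:

```python
import os

def require_env(name):
    """Return the value of an environment variable or fail loudly.

    Illustrative helper: raising while settings.py is being imported
    makes a missing variable obvious instead of causing a late
    database login error.
    """
    value = os.environ.get(name)
    if not value:
        raise RuntimeError("Required environment variable %s is not set" % name)
    return value
```

You would then write 'USER': require_env('DATABASE_USER') in the DATABASES dict.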
Redeploy Script
Let's create a redeploy script to redeploy in one click (let's call it redeploy.sh):
#!/bin/sh

if [ -z "$RUN_ENV" ]; then
    echo 'Please set up RUN_ENV variable'
    exit 1
fi

if [ "$RUN_ENV" = "PROD" ]; then
    git pull
fi

docker-compose stop
docker-compose rm -f
docker-compose up -d
Let's check what it does:
- It checks if the variable RUN_ENV is set, and exits if it isn't
- If RUN_ENV is set to PROD, it runs git pull to get a new version of your project
- It stops all services specified in the docker-compose.yml file
- It removes all existing containers
- It starts new containers
So it's pretty simple: to redeploy you just need to run ./redeploy.sh!
Don't forget to make the script executable (chmod +x redeploy.sh).
To make a quick redeploy use this command:
docker-compose up --no-deps -d django
But how does our service actually start inside the container? Let's take a look in the next section!
Deploying Inside the Container
We just need to run supervisor to actually start our service inside the container:
python manage.py supervisor
If you don't use supervisor, you can just start your server instead of supervisor.
If you do use supervisor, let's take a look at the supervisord.conf file:
[supervisord]
environment=C_FORCE_ROOT="1"

[program:__defaults__]
redirect_stderr=true
startsecs=10
autorestart=true

[program:gunicorn_server]
command=gunicorn -w 4 -b 0.0.0.0:8001 YourApp.wsgi:application
directory={{ PROJECT_DIR }}
stdout_logfile={{ PROJECT_DIR }}/gunicorn.log
So, the supervisor will start your Gunicorn server with 4 workers and bind it to port 8001.
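The -w 4 above hard-codes four workers. A common rule of thumb from the Gunicorn documentation (not from this article) is 2 x CPU cores + 1; here is a small sketch of that heuristic:

```python
import multiprocessing

def suggested_workers(cpu_count=None):
    """Return the common "2 * cores + 1" Gunicorn worker heuristic.

    cpu_count defaults to the host's core count; passing it explicitly
    keeps the function deterministic and easy to test.
    """
    if cpu_count is None:
        cpu_count = multiprocessing.cpu_count()
    return 2 * cpu_count + 1
```

For example, on a 4-core host this suggests gunicorn -w 9.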
Development Process
- To access your local server you just go to
- To redeploy local changes do
./redeploy.sh
- To see all logs of your project run:
docker-compose logs
- To quickly redeploy changes use:
docker-compose restart django
- To connect to the Django shell just do (if you have a running container called CONTAINER):
docker exec -it CONTAINER python manage.py shell
and run this command to create an initial superuser:
from django.contrib.auth.models import User; User.objects.create_superuser('admin', 'admin@example.com', 'admin')
- To make migrations:
docker exec -it CONTAINER python manage.py schemamigration blabla --auto
- Or you can connect to bash inside the container:
docker exec -it CONTAINER /bin/bash
- You can dump your local database to .json file (you can specify a table to dump):
docker exec -it CONTAINER python manage.py dumpdata > testdb.json
- Or you can load data to your database from file:
docker exec -it CONTAINER python manage.py loaddata testdb.json
- Use this command to monitor status of your running containers:
docker stats $(docker ps -q)
- Use this command to delete all stopped containers:
docker rm -v `docker ps -a -q -f status=exited`
Where CONTAINER is supposed to be project_django_1.
You can play with your containers as you want. Here is a useful Docker cheat sheet.
Vagrant
Intro
The only reason we use Vagrant is to be able to create an isolated development environment based on a virtual machine with all the necessary services to reproduce a real production environment. In other words, just to be able to run Docker.
This environment can be easily and quickly installed on any of your co-workers' machines. To do this you only need to install Vagrant and VirtualBox (or another provider) on the local machine.
As we mentioned before, Vagrant could be overhead but we still wanted to cover this part of the process.
Vagrantfile
To set up a virtual machine, we need to create a file called Vagrantfile in the root directory of our project.
This is a good example of Vagrantfile:
# -*- mode: ruby -*-
# vi: set ft=ruby :

Vagrant.configure(2) do |config|
  config.vm.box = "ubuntu/trusty64"

  # We are going to run docker container on port 80 inside vagrant and expose it to port 8000 outside vagrant
  config.vm.network :forwarded_port, guest: 80, host: 8000

  # You can forward any number of ports:
  # config.vm.network :forwarded_port, guest: 5555, host: 5555

  # All files from current directory will be available in /project directory inside vagrant
  config.vm.synced_folder ".", "/project"

  # We are going to give VM 1/4 system memory & access to all cpu cores on the host
  config.vm.provider "virtualbox" do |vb|
    host = RbConfig::CONFIG['host_os']
    if host =~ /darwin/
      cpus = `sysctl -n hw.ncpu`.to_i
      mem = `sysctl -n hw.memsize`.to_i / 1024 / 1024 / 4
    elsif host =~ /linux/
      cpus = `nproc`.to_i
      mem = `grep 'MemTotal' /proc/meminfo | sed -e 's/MemTotal://' -e 's/ kB//'`.to_i / 1024 / 4
    else
      cpus = 2
      mem = 1024
    end
    vb.customize ["modifyvm", :id, "--memory", mem]
    vb.customize ["modifyvm", :id, "--cpus", cpus]
  end

  config.vm.provision "shell", inline: <<-SHELL
    # Install docker and docker compose
    sudo -i
    echo 'debconf debconf/frontend select Noninteractive' | debconf-set-selections
    curl -sSL | sh
    usermod -aG docker vagrant
    curl -L-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
    chmod +x /usr/local/bin/docker-compose
  SHELL
end
We are going to use this file to start our virtual machine by typing this command:
vagrant up --provision
If you see a warning about a new version of your initial box, just do:
vagrant box update
Let's look closer at what it does:
- It will pull an image of the ubuntu/trusty64 operating system from the Vagrant repository
- It will expose port 80 inside the machine as port 8000 outside
- It will mount your current directory to the /project directory inside the machine
- It will give the virtual machine 1/4 of system memory and access to all CPU cores
- It will install Docker and Docker-Compose inside the virtual machine
This is it. Now you just need to go inside this VM by typing:
vagrant ssh
Let's check if everything is properly installed:
docker --version
docker-compose --version
If not, try to install Docker and Docker-Compose manually!
Let's go to our project directory inside the VM and set the environment variable RUN_ENV to DEV:

cd /project
export RUN_ENV=DEV
Now you can do local redeploy as simple as:
./redeploy.sh
Enjoy your local server on
Closing Down Virtual Machine
You can use three different commands to stop working with your VM:
vagrant suspend  # to freeze your VM with saved RAM
vagrant halt     # to stop your VM but keep all files
vagrant destroy  # to fully delete your virtual machine!
Additional Information
Creating Local Database in Container
Preparation
We assume that you use Docker 1.9 and Docker-Compose 1.5.1 or later.
Volumes were a mess in Docker before version 1.9, but Docker has now introduced named volumes!
You can create volumes independently from containers and images. This allows you to have a persistent volume accessible from any number of mounted containers, and it keeps data safe even when containers are dead.
It's very simple. To list all your local volumes do:
docker volume ls
To inspect the volume:
docker volume inspect <volume_name>
To create a new volume:
docker volume create --name=<volume_name>
And, to delete a volume just do:
docker volume rm <volume_name>
TIP: Now you can remove old volumes created by some containers in the past.
Creating Local Database
We are going to run a Docker container with PostgreSQL and mount it to the already created Docker Volume.
Let's create a local volume for our database:
docker volume create --name=local_postgres
We are going to use a temporary file, docker-compose.database.yml:
postgres:
  image: postgres:9.1
  container_name: some-postgres
  volumes:
    - local_postgres:/var/lib/postgresql/data
  ports:
    - "5432:5432"
  environment:
    POSTGRES_PASSWORD: "$POSTGRES_PASSWORD"
    POSTGRES_USER: "$POSTGRES_USER"
And, run a container with PostgreSQL:
docker-compose -f docker-compose.database.yml up -d
Now you have an empty database running in the container and accessible on
localhost:5432 (or another host if you run Docker on Mac).
You can connect to this container:
docker exec -it some-postgres /bin/bash
Switch to the right user:
su postgres
Run
PSQL:
psql
And, for example, create a new database:
CREATE DATABASE test;
To exit
PSQL type:
\q
You can easily remove this container by:
docker-compose -f docker-compose.database.yml stop docker-compose -f docker-compose.database.yml rm -f
All created data will be stored in the
local_postgres volume. Next time you need a database, you can run a new container and you will already have your
test database created.
Dumping Local Database to File
Let's say you want to dump data from your local database. You need to run a simple container and mount your local directory to some directory in the container.
Then you go inside the container and dump data to file in this mounted directory.
Let's do it:
postgres: image: postgres:9.1 container_name: some-postgres volumes: - local_postgres:/var/lib/postgresql/data - .:/data ports: - "5432:5432" environment: POSTGRES_PASSWORD: "$POSTGRES_PASSWORD" POSTGRES_USER: "$POSTGRES_USER"
In this example, we are going to mount our current directory to directory
/data inside the container.
Go inside:
docker exec -it some-postgres /bin/bash su postgres
And dump:
pg_dump --host 'localhost' -U postgres test -f /data/test.out
This command will dump the test database to the file test.out in your current directory.
You can delete this container now.
Use the same technique with mounting directories to load data from file.
Use this command to load data if you are already in the Postgres container and have this
/data/test.out mounted.
psql --host 'localhost' --username postgres test < /data/test.out
Don't forget to create the database
test before this.
Git
When the server is running it will create additional files such as
.log,
.pid, etc.
You don't need them to be committed!
Don't forget to create the .gitignore file:
.idea db.sqlide3 *.pyc *.ini *.log *.pid /static .vagrant/
Static Files
You might have some problems with serving static files in production only with Gunicorn, so check this out.
Don't forget to create an empty folder
/static/ in your project root directory with file
__init__.py to serve static from it.
With this schema, your settings.py file should include:
STATIC_URL = '/static/' STATIC_ROOT = os.path.join(BASE_DIR, 'static')
And, to serve static with Gunicorn on production add this to the end of
urls.py:
SERVER_ENVIRONMENT = os.getenv('RUN_ENV', '') if SERVER_ENVIRONMENT == 'PROD': urlpatterns += patterns('', (r'^static/(?P<path>.*)$', 'django.views.static.serve', {'document_root': settings.STATIC_ROOT}), )
Otherwise, you may use additional services to serve static files, for example, to use NGINX server.
The End
Please don't be shy to share your experience of organizing the development process in your team
and the best practices of working with Docker and Vagrant.
Drop me some feedback at:
andrii@preply.com or facebook.com/dvoyak
Thank you!
Published at DZone with permission of Andrii Dvoiak . See the original article here.
Opinions expressed by DZone contributors are their own.
{{ parent.title || parent.header.title}}
{{ parent.tldr }}
{{ parent.linkDescription }}{{ parent.urlSource.name }} | https://dzone.com/articles/virtualization-of-development-process-with-docker | CC-MAIN-2019-47 | refinedweb | 3,251 | 54.52 |
Actors in Java: Coding the Fortune Cookie Application
This week I am presenting a coding of a simple two actor application designed to show actor creation, message passing, and actor termination. This application is called the "Fortune Cookie Application". It is the next state of the art beyond the HelloWorld program. Despite its simplicity this application shows the elements that every actor application uses.
What We Will Be Using
This exercise and all that will follow in the coming months will make use of the Akka actor libraries and microkernel from the Typesafe company. By way of full disclosure at the time of this writing I am not employed by or affiliated in any compensable manner with the Typesafe company. I am a user of their Akka technology and choose it based solely on merit. Additionally we will be coding in Java exclusively, using Java JDK 7 or later for all of our work.
Getting Started
First it is necessary to obtain the Akka library jars that we will use. These can be downloaded free of charge from the Typesafe company at this download site. Please unpack the library jars for Akka into any convenient directory of your choice that has appropriate permissions set to allow read access for the account that will be used for this exercise. Once that is done check the list of Akka 2.2.1 jars below to make sure you have what you need. You will note that some of them are Scala related jars. Because the Akka middleware and microkernel are written in the Scala language it is necessary to retain these jars even though all of our own work will be coded in Java.
Obtaining Source
The next thing to do is to obtain the source for our application. This can be found in two places. You can download the file attachment to this blog post entitled 'FortuneCookie-src.zip'. I have also created an actor learning project and committed the "Fortune Cookie Application" source to it. You can find the project at Learning Actors in Java. The thing to note is that if you obtain source from either the attached file or from the project you will get compile and go bash scripts that were written for my use on my personal workstation which is a Linux platform. So adjust these for your own operating system if you wish to use scripts but also if you prefer to work with an IDE then by all means do so. The compile and go commands shown below in this post are generic javac and java commands. In all cases it is necessary for you to adjust the classpath of commands to reflect where you have deployed the Akka jar files.
Actor Creation
The most common and useful actors are objects instantiated from a class that extends UntypedActor. Actors are organized into actor systems that are hierarchical in structure. Extending UntypedActor creates an actor that is created by an actor system. That actor system is its parent and supervisor. The actor created thus operates at the top level in the hierarchy of the actor system to which it belongs. Its context is the actor system in which it was created. The notion of context is crucial to understanding supervision in actor interaction. The fact that the diner actor is created in the context of a certain actor system makes that actor system its supervisor. There can be more than one actor at the top level within an actor system. See the line below in the Diner.java source.
The actor we have created here models a dining patron who will interact with a waiter. The diner actor is top level and will be superior to a child actor that will model the waiter. Child actors are created by parent actors using methods that configure and manufacture the child actor. See the line below in the source Diner.java.
These method calls are found in all actor applications. A lot is going on here in one statement so let's take it apart to understand everything that's happening. First on the left hand side of the assignment statement we see an ActorRef field. An ActorRef field is a reference to the actor being created. It has a one to one relationship with an actor. It is the only way that one actor has of sending a message to another actor. ActorRefs are immutable, network aware, and serializable. They can be sent in a message and used within other actors.
Next, on the right hand side of the assignment statement is the 'getContext()'. The context being fetched here is not the context of the actor system. It is the context of this actor. The next part of the statement is 'actorOf(....)'. This is the factory method that makes new child actors. This method takes configuration information provided by Props, an actor configuring class. Here in the 'Props.create(....)' part of the statement we will name the class of the child actor to be created. Finally at the end of the statement we see the name "waiter". This provides an identifying display name for the actor that will appear in logs and various debugging mechanisms. It is optional but recommended. I like to name my actor the same as the ActorRef field name.
In summary this actor creation statement will create a child actor of class 'Waiter' to be supervised within the context of its creating parent 'Diner'. Information about it will appear in logs and debug text identified as "waiter".
Sending Messages
Actors have their local state completely sealed from the outside world. The exchange of information that occurs between actors takes place entirely by the exchange of messages. An example of passing a message can be seen in the source Diner.java on the line below.
Communicating a message is very simple. The 'tell(...)' method is executed on the ActorRef field of the actor who is to be the recipient of the message. The tell takes two parameters, the message object and a return address for replies. The normal return address is the sender's self address and that is what is illustrated here.
Receiving Messages
Every actor is required to have the method onReceive() coded. This gives the actor the capability to receive messages sent to its mailbox. An example of this can be found in the source 'Waiter.java'. Look at the lines below taken from this file.
public void onReceive( Object message )
{
In onReceive() it is best to be certain to process every message that enters. The messages will at the grossest level break down into two categories; those that are recognized and those that are not recognized. The processing of recognized messages is of course application specific. The processing of unrecognizable messages is a lot more standard. They are usually published to a facility known as the EventStream which provides functionality whose description is beyond the scope of this post. An example handling unrecognizable messages can be seen in the lines below taken from the source 'Waiter.java'.
Termination
When an actor is stopped the entire hierarchy of actors beneath it is also stopped. This makes for easy leak proof management of actors. An example of actor termination can be found on the line below taken from the file 'Diner.java'.
Conclusion
The 'Fortune Cookie Application' presented here illustrates some fundamental techniques needed by all actor applications. This application in its current form runs entirely with default configuration values for its actor system. Next week we will return to the 'Fortune Cookie Application' to learn about how to custom configure its actor system to meet a variety of purposes and perspectives. See "Actors in Java: Configuring Local Actor Systems" in about a week.
Diner.java is source for the diner actor in the Fortune Cookie application; a simple exercise designed to show basic child creation, messaging, and termination.
***/
import akka.actor.UntypedActor;
import akka.actor.ActorRef;
public class Diner extends UntypedActor
{
private final ActorRef waiter;
public Diner()
{
// Create the waiter actor, a child actor of the parent diner actor.
waiter = getContext().actorOf( Props.create( Waiter.class), "waiter");
}
@Override
public void preStart()
{
// Ask the waiter for a fortune cookie.
waiter.tell( Waiter.Message.ASK_FOR_COOKIE, getSelf());
}
// This is where we listen for messages.
@Override
public void onReceive( Object message )
{
// Reveal the fortune.
System.out.println( message.toString() );
getContext().stop( getSelf());
}
}//end class Diner
Waiter.java is source for the the waiter actor in the Fortune Cookie application.
***/
import java.util.Random;
import java.util.Date;
import java.util.List;
import java.util.Arrays;
public class Waiter extends UntypedActor
{
public static enum Message { ASK_FOR_COOKIE; }
private List
"Your monkey face will earn you many peanuts.",
"You will discover fountain of youth in middle of quick sand pit.",
"You will be safe. Your enemies will all die laughing.",
"The wisdom of our ancient sages will blow past your head like a gentle breeze.",
"Your only friend will soon betray you. This betrayal will bring a new friend.",
"You will be shunned by everyone for fearing to read your fortune cookie.");
@Override
public void onReceive( Object message )
{
if ( message == Message.ASK_FOR_COOKIE )
{
// Reply only to a message we recognize.
Date date = new Date();
Random generator = new Random( date.getTime());
int index = generator.nextInt( fortune_cookies.size());
// Reply to sender with message and self address.
getSender().tell( "\n"+fortune_cookies.get( index )+"\n", getSelf());
}
else
{
// We will publish messages not recognized to the EventStream.
unhandled( message );
}
}
}//end class Waiter
compile
javac -cp "./:path_to_akka_jars/*" Waiter.java Diner.java
go
java -classpath "./:path_to_akka_jars/*" akka.Main Diner
- Login or register to post comments
- Printer-friendly version
- jcmansigian's blog
- 5413 reads | https://weblogs.java.net/blog/jcmansigian/archive/2013/10/22/actors-java-coding-fortune-cookie-application | CC-MAIN-2015-35 | refinedweb | 1,608 | 57.47 |
Some time ago I was helping a customer with a case related to the Windows DNS client. This customer was getting inconsistent name resolution results and he wanted to know why this was happening. The issue that this customer faced was related to the configuration of the DNS servers list in the clients and an incorrect assumption he made about the way that this list is used when a name has to be resolved.
I wanted to start this blog about DNS with a post that tries to clarify some of the concepts related to the use of the DNS servers list and timeouts by the Windows DNS client. In this first part I will describe a sample scenario, a "solution" to a requirement using an incorrect, but common, assumption and the problem with this solution. In the second part I will explain the behavior of the Windows DNS Client when dealing with timeouts.
The sample scenario
The sample environment consists of two Active Directory single-domain forests named contoso.local and fabrikam.local, both in the same location/LAN. These domains are managed by different people, with no trust relationship between them.
- contoso.local has two domain controllers: contosodc1.contoso.local with IP 10.24.8.11 and contosodc2.contoso.local with IP 10.24.8.12. There are also two workstations joined to the domain: client1.contoso.local with IP 10.24.9.101 and client2.contoso.local with IP 10.24.9.105.
- fabrikam.local has two domain controllers: fabrikamdc1.fabrikam.local with IP 10.24.8.31 and fabrikamdc2.fabrikam.local with IP 10.24.8.32. There is a member server in this domain named server1.fabrikam.local with IP 10.24.8.35.
All the DCs are running the DNS Server service with forwarders configured to the ISP. No name resolution is configured between the two domains. None of the internal domains are published on the Internet DNS servers. WINS is not used.
Members in each domain have their local domain's DCs listed as their DNS servers.
The requirement and the "solution"
Users of CLIENT1 and CLIENT2 need to access a share in SERVER1. The administrators decide to create user accounts for these users in fabrikam.local and have them provide these credentials every time they access this share instead of creating a trust relationship. Also, due to political issues between the administrators, neither of the groups wants to forward DNS queries to the other domain or make any changes to their DNS servers that implied contacting the other domain's DNS servers or listing the other domain's servers locally.
Now when the users in CLIENT1 and CLIENT2 need to connect to server1.fabrikam.local they first need to resolve its name to an IP address. This name resolution is not available in the current environment. The "solution" that the administrators in contoso.local decide to use is to list FABRIKAMDC1 and FABRIKAMDC2 as alternate DNS servers in CLIENT1 and CLIENT2 intermixed with their current DNS servers. The configuration of the DNS servers list for these clients is then going to be: CONTOSODC1, FABRIKAMDC1, CONTOSODC2 and FABRIKAMDC2. The desired configuration would look like this:
This solution is based on the (wrong) assumption that if these clients need to resolve server1.fabrikam.local then the secondary DNS server 10.24.8.31 (FABRIKAMDC1) is going to be used to resolve it because the primary DNS server 10.24.8.11 (CONTOSDC1) will not be able to do so.
Does this solution work?
Luckily (I will explain in a little bit why I am using this word) this solution works sometimes, but most of the times it fails. When it fails the clients are going to get a message that the name of SERVER1 cannot be found.
What is wrong with this solution?
The problem with this solution is the wrong assumption that all the DNS servers in the list are going to be queried until the name is found (positive answer) and that the query will fail only when all of the listed DNS servers answer that the name does not exist (negative answer). In other words: the client will try all the possible DNS servers it has configured until it gets a positive answer before it gives up the query and accepts any negative answer.
So what is the actual behavior of the DNS client?
The actual behavior of the DNS client is that it is going to query its DNS servers in the order that they are listed until an answer, either positive or negative, is received. Once an answer is received, either positive or negative, the DNS client stops the query process and gives that answer back to the calling application. Only when a query to a DNS server times-out (or reports a server error) is when the client retries the query with the next DNS server in the list. In other words: negative answers do no trigger retries with alternate DNS servers, only timeouts (and other errors) do.
Note #1: in the Windows DNS Client this behavior of querying servers in order slightly changes after 3 attempts to resolve a query time-out. I will explain this behavior in part 2 of this post.
Note #2: the calling application might have some logic to retry queries that receive a negative response, but this would be outside the knowledge of the DNS Client process.
Think of this concept as: the DNS client is looking for an answer, not for a positive answer. Once the client gets an answer it "trusts" that the DNS server that sent it did its best to get the correct answer and then it stops querying other servers.
In a DNS infrastructure any server should be able to resolve a name as long as the name exists in the DNS namespace. This means that it does not matter the DNS server that you query because you will eventually get the right answer (positive or negative). Multiple DNS servers are configured for fault-tolerance, not because they have access to disjoint DNS namespaces.
NOTE #3: a similar misconception exists about the use of multiple forwarders, but this will be the topic of another blog post.
So why does this solution work sometimes "by luck"?
This solution works when the query asking for server1.fabrikam.local that is sent to CONTOSDC1 times-out and is retried by the client in the secondary server FABRIKAMDC1. In this case FABRIKAMDC1 will be able to immediately answer with the IP of SERVER1.
What could be the cause for CONTOSODC1 to not answer before the client times-out? CONTOSODC1 has to rely on its forwarders to resolve names in fabrikam.local (remember that CONTOSODC1 does not have a copy of the zone fabrikam.local or any resolution to that domain) and the forwarders could be slow to respond, or the link to the ISP could congested, or the forwarders could be wrongly configured, or the forwarders timeout in CONTOSODC1 were set to high values, and a long etcetera.
This solution breaks when another client sends the same query to CONTOSDC1 shortly after the first query was resolved. This is because CONTOSDC1 would have then already cached a negative response for the name and then it can immediately answer to this query from its cache without contacting the forwarders. The solution also breaks when the name is not in the cache of CONTOSODC1 but the process of querying the forwarders allows it to answer (with a negative response) before the client times-out and switches to FABRIKAMDC1.
At the end for this "solution" to work the client has to be "lucky enough" to be querying CONTOSODC1 when the name is not already in the negative cache of this DNS server and the time to get a response back from the forwarders is higher than the client timeout.
If you are still wondering why CONTOSODC1 will always answer with a negative response for queries in fabrikam.local then you have to remember that neither CONTOSODC1 nor any of the Internet DNS servers have any knowledge of domain fabrikam.local, so none of them have any means to positively resolve names in this domain.
Can you show me some traces?
Here are some network traces that show the behavior that I just described:
In the first trace we have the original configuration of CLIENT1 listing only the DNS servers of its own domain and trying to resolve the name server1.fabrikam.local:
As expected both of the DNS servers answered with a negative response (name error) in packets #3 and #4 because none of them can resolve the name. Notice that CLIENT1 resent the query to CONTOSODC2 in packet #2 because the query to CONTOSODC1 in packet #1 timed-out after 1 second (I will explain this timeout value in Part 2). This timeout was probably due to CONTOSODC1 waiting to get an answer from its forwarders but we can see that it eventually sent back an answer in packet #3 (same logic applies for the delayed response from CONTOSODC2 in packet #4). We will use this fact that CONTOSODC1 takes longer than the client timeout to answer back to the client to analyze the next traces.
The next trace shows what happens when we configure CLIENT1 to use the DNS servers of fabrikam.local as alternate DNS servers using the "solution" we previously discussed (assume that the DNS Server service cache in CONTOSODC1 is empty at this point):
Notice how CLIENT1 first tried to resolve the name using CONTOSODC1 in packet #1; after 1 second it timed-out and retried the query using FABRIKAMDC1 in packet #2 who answered almost instantaneously in packet #3. CLIENT1 then got a positive answer indicating the IP address 10.24.8.35 for server1.fabrikam.local (notice the success answer containing this IP in packet #3). Also notice that some seconds later we have packet #4 which is the (delayed) response from CONTOSODC1 indicating that the name cannot be found, but at this point CLIENT1 has already resolved the name and does not need to take care of this answer.
Now what happens if CLIENT2 (who has the DNS servers list configured exactly as CLIENT1) tries to resolve the same name just after CLIENT1 resolved the name as in the previous trace (packets #1 to #4 from the previous trace are included to reinforce the sequence of events):
Notice how in packet #5 CLIENT2 sends the query to its primary DNS server CONTOSODC1. CONTOSODC1 has already cached that the name does not exist (due to the query that CLIENT1 previously sent) and can almost instantaneously answer back to CLIENT2 with a negative answer in packet #6. Notice how CLIENT2 does not retry the query using its secondary DNS server FABRIKAMDC1 because it already got an answer from the primary server.
Note #4: in a future post I will blog about the use of the positive and negative caches in Microsoft DNS Server.
In the next part of this post I will explain how the Windows DNS client deals with timeouts and its retry logic.
I hope that you find this information useful. Please provide your feedback about the content in the comments sections. I have several ideas for the next posts and I would like to hear your comments about topics that might be interesting for you.
Great post!!
Amazing detailed explanation Mr. Karam.
PFE Rocks 🙂
Finally, this is explained in an easy-to-grasp manner. Great post, Karam.
Curious, what are you using for tracing ? Netmon ?
-Alex
super good post! thank you
this is good stuff but only helps people who don't understand how DNS works!
“this is good stuff but only helps people who don’t understand how DNS works!”
Which is a lot of people.
Wonderful, wonderful post.
I have 2 AD integrated DC / DNS
On both the server when i do a nslookup the dns requests get timed out if i try to nslookup back to back again it gets resolved. Even the records on forward lookup zone are timed out.
I have checked all the settings adapter/forwarders/
Please suggest. | https://blogs.technet.microsoft.com/stdqry/2011/12/02/dns-clients-and-timeouts-part-1/ | CC-MAIN-2017-47 | refinedweb | 2,029 | 61.97 |
Flow is something which I mostly ignored till now, recently I created few flow in my project and found them very interesting and easy to develop to handle even complex functionality. So I thought to try my hands on Flow to develop some simple components which will be helpful in daily use.
One of such requirement is Create Data for Testing in sandbox or in dev org. As a developer we have many options but Admin have limited scope. If they need to do bulk testing then they have external help with data creation or they need to spend long time to clone record using CSV and then upload or in worst case need to create manually. So Today we will create a simple component using which they can Clone Record in Bulk using Flow.
I have created a basic apex class for this which will perform required DML operations.
public class GenericFlowHelper { @InvocableMethod(label='create records' description='Create records.') public static List<FlowWrapper> createRecords(List<FlowWrapper> flowData) { List<FlowWrapper> fwList = new List<FlowWrapper>(); Set<ID> recordIDSet = new Set<Id>(); for(FlowWrapper fw : flowData) recordIDSet.add(fw.recordId); String sObjectName = flowData[0].recordId.getsobjecttype().getDescribe().getName(); Map<String,Schema.SObjectField> objfields = Schema.getGlobalDescribe().get(sObjectName).getDescribe().fields.getMap(); String query = 'SELECT '; query += String.join(new List<String>(objfields.KeySet()),','); query += ' FROM '+sObjectName+' WHERE ID IN : recordIDSet'; List<sObject> sobjList = new List<sObject>(); Integer currentRecord = 0; for(Sobject sob : Database.query(query)) { FlowWrapper fw = new FlowWrapper(); for(Integer co = 1 ; co<= flowData[currentRecord].count; co++) { Sobject newObj = sob.clone(false, true, false, false); sobjList.add(newObj); } fw.sObjectList = sobjList; fw.isSuccess = true; fw.recordId = sob.Id; fw.count = flowData[currentRecord].count; fwList.add(fw); currentRecord++; } insert sobjList; return fwList; } public class FlowWrapper { @InvocableVariable(required=true) Public Id recordId; @InvocableVariable(required=true) Public Integer count; @InvocableVariable Public List<sObject> sObjectList; @InvocableVariable Public boolean isSuccess; } }
Code Flow:
In this class I will get RecordId and number of records they want to create, using these details with Apex describe I have made dynamic query and clone the records using clone method.
If you have noticed I have used Generic sObject as attribute, With Spring 20 now it is supported to use Generic sObject as Parameters, so we can use this component with any suppoorted sObject type. Although I haven’t added much validation so if you want you can add more checks in code and can play with it.
Now the Flow is very simple for this.
So in first screen we are getting Inputs from users and Sending them in Apex using Action. So using just two steps we can Clone Record in Bulk Using Flow.
This is how our Final UI will look like
I have created one package, so you can easily install this in your orgs.
This component is fully Dynamic and can be used with any supported sObject.
Now you can officially use URL Hacks in Lightning Experience: Check here
I am exploring flow, so if you want me to implment some use case using flow, let me know in comments.In future I will share similar components using Flow. Happy Programming 🙂 | https://newstechnologystuff.com/2020/03/01/clone-record-in-bulk-using-flow/ | CC-MAIN-2020-34 | refinedweb | 523 | 54.73 |
Pandas
Serious practitioners of data science use the full scientific method, starting with a question and a hypothesis, followed by an exploration of the data to determine whether the hypothesis holds up. But in many cases, such as when you aren't quite sure what your data contains, it helps to perform some exploratory data analysis—just looking around, trying to see if you can find something.
And, that's what I'm going to cover here, using tools provided by the amazing Python ecosystem for data science, sometimes known as the SciPy stack. It's hard to overstate the number of people I've met in the past year or two who are learning Python specifically for data science needs. Back when I was analyzing data for my PhD dissertation, just two years ago, I was told that Python wasn't yet mature enough to do the sorts of things I needed, and that I should use the R language instead. I do have to wonder whether the tables have turned by now; the number of contributors and contributions to the SciPy stack is phenomenal, making it a more compelling platform for data analysis.
In my article "Analyzing Data", I described how to filter through logfiles, turning them into CSV files containing the information that was of interest. Here, I explain how to import that data into Pandas, which provides an additional layer of flexibility and will let you explore the data in all sorts of ways—including graphically. Although I won't necessarily reach any amazing conclusions, you'll at least see how you can import data into Pandas, slice and dice it in various ways, and then produce some basic plots.
Pandas
NumPy is a Python package, downloadable from the Python Package Index (PyPI), which provides a data structure known as a NumPy array. These arrays, although accessible from Python, are mainly implemented in C for maximum speed and efficiency. They also operate on a vector basis, so if you add 1 to a NumPy array, you're adding 1 to every single element in that array. It takes a while to get used to this way of thinking, and to the fact that the array should have a uniform data type.
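For example, here is a minimal sketch of that vectorized, uniformly typed behavior (the values are arbitrary):

```python
import numpy as np

a = np.array([10, 20, 30])

# Adding a scalar applies element-wise across the whole array
print(a + 1)    # [11 21 31]

# Every element shares a single dtype; mixing int and float
# promotes the whole array to a common floating-point type
print(np.array([1, 2.5]).dtype)    # float64
```

Note that no loop is written anywhere; the iteration happens inside NumPy's C code, which is where the speed comes from.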
Now, what can you do with your NumPy array? You could apply any number of functions to it. Fortunately, SciPy has an enormous number of functions defined and available, suitable for nearly every kind of scientific and mathematical investigation you might want to perform.
But in this case, and in many cases in the data science world, what I really want to do is read data from a variety of formats and then explore that data. The perfect tool for that is Pandas, an extensive library designed for data analysis within Python.
The most basic data structure in Pandas is a "series", which is basically a wrapper around a NumPy array. A series can contain any number of elements, all of which should be of the same type for maximum efficiency (and reasonableness). The big deal with a series is that you can set whatever indexes you want, giving you more expressive power than would be possible in a NumPy array. Pandas also provides some additional functionality for series objects in the form of a large number of methods.
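To illustrate those expressive indexes, here is a small, made-up series of exam scores keyed by subject name rather than by integer position:

```python
import pandas as pd

scores = pd.Series([95, 90, 85],
                   index=['math', 'history', 'biology'])

# Look up by label, not just by position
print(scores['math'])    # 95

# One of the many methods a series provides beyond a raw array
print(scores.mean())     # 90.0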
But the real powerhouse of Pandas is the "data frame", which is something like an Excel spreadsheet implemented inside of Python. Once you get a table of information inside a data frame, you can perform a wide variety of manipulations and calculations, often working in similar ways to a relational database. Indeed, many of the methods you can invoke on a data frame are similar or identical in name to the operations you can invoke in SQL.
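To make the relational-database comparison concrete, here is a small data frame (with invented column names and values) filtered and sorted much as a SQL WHERE/ORDER BY clause would do:

```python
import pandas as pd

df = pd.DataFrame({'name': ['alice', 'bob', 'carol'],
                   'score': [95, 80, 90]})

# Roughly: SELECT * FROM df WHERE score > 85 ORDER BY score
result = df[df['score'] > 85].sort_values('score')
print(result)
```

The boolean expression `df['score'] > 85` is itself a vectorized operation, producing a series of True/False values that selects the matching rows.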
Installing Pandas isn't very difficult, if you have a working Python
installation already. It's easiest to use
pip, the standard Python
installation program, to do so:
sudo pip install -U numpy matplotlib pandas
The above will install a number of different packages, overwriting the existing installation if an older version of a package is installed.
As good as Pandas is, it's even better when it is integrated with the rest of the SciPy stack and inside of the Jupyter (that is, IPython) notebook. You can install this as well:
sudo pip install -U 'jupyter[notebook]'
Don't forget the quotes, which ensure that the shell doesn't try to interpret the square brackets as a form of shell globbing. Now, once you have installed this, run the Jupyter notebook:
jupyter notebook
If all goes well, the shell window should fill with some logfile output. But soon after that, your Web browser will open, giving you a chance (using the menu on the right side of the page) to open a new Python page. The idea is that you'll then interact with this document, entering Python code inside the individual cells, rather than putting them in a file. To execute the code inside a cell, just press Shift-Enter; the cell will execute, and the result of evaluating the final line will be displayed.
Even if I wasn't working in the area of data science, I would find the Jupyter Notebook to be an extremely clean, easy-to-use and convenient way to work with my Python code. It has replaced my use of the text-based Python interactive shell. If nothing else, the fact that I can save and return to cells across sessions means that I spend less time re-creating where I was the previous time I worked on a project.
Inside Jupyter Notebook, you'll want to load NumPy, Pandas and a
variety of related functionality. The easiest way to do so is to use
a combination of Python
import statements and the
%pylab magic
function within the notebook:
%pylab inline import pandas as pd from pandas import Series, DataFrame
The above ensures that everything you'll need is defined. In theory, you
don't need to alias Pandas to
pd, but everyone else in the Pandas
world does so. I must admit that I avoided this alias for some time,
but finally decided that if I want my code to integrate nicely with
other people's projects, I really should follow their conventions.
Reading the CSV
Now let's read the CSV file that I created for my previous article. As you might remember, the file contains a number of columns, separated by tabs, which were created from an Apache logfile. It turns out that CSV, although a seemingly primitive format for exchanging information, is one of the most popular methods for doing so in the data science world. As a result, Pandas provides a variety of functions that let you turn a CSV file into a data frame.
The easiest and most common such function is
read_csv. As you might
expect,
read_csv can be handed a filename as a parameter, which it'll
read and turn into a data frame. But
read_csv, like many other
of the
read_* functions in Pandas, also can take a file object or even
a URL.
I started by trying to read access.csv, the CSV file from
my previous
article, with the
read_csv method:
df = pd.read_csv('access.csv')
Unfortunately, this failed, with a very strange error message,
indicating that different lines of the file contained different
numbers of fields. After a bit of thought and debugging, it turns out
that this error is because the file contains tab-separated values, and
that the default setting of pd.read_csv is to assume comma
separators. So, you can retry your load, passing the
sep parameter:
df = pd.read_csv('access.csv', sep='\t')
And sure enough, that worked! Moreover, if you ask for the keys of the Pandas data frame you have just created, you get the headers as they were defined at the top of the file. You can see those by asking the data frame to show you its keys:
df.keys()
Now, you can think of a data frame as a Python version of an Excel spreadsheet or of a table in a two-dimensional relational database, but you also can think of it as a set of Pandas series objects, with each series providing a particular column.
I should note that
read_csv (and the other
read_* functions in Pandas)
are truly amazing pieces of software. If you're trying to read from a
CSV file and Pandas isn't handling it correctly, you either have
an extremely strange file format, or you haven't found the right
option yet.
Navigating through the Data Frame
Now that you've loaded the CSV file into a data frame, what can you do with it? First, you can ask to see the entire thing, but in the case of this example CSV file, there are more than 27,000 rows, which means that printing it out and looking through it is probably a bad idea. (That said, when you look at a data frame inside Jupyter, you will see only the first few rows and last few rows, making it easier to deal with.)
If you think of your data frame as a spreadsheet, you can look at individual rows, columns and combinations of those.
You can ask for an entire column by using the column (key) name in square brackets or even as an attribute. Thus, you can get all of the requested URLs by asking for the "r" column, as follows:
df['r']
Or like this:
df.r
Of course, this still will result in the printing of a very large number of rows. You can ask for only the first five rows by using Python slice syntax—something that's often quite confusing for people when they start with Pandas, but which becomes natural after a short while. (Remember that using an individual column name inside square brackets produces one column, whereas using a slice inside square brackets produces one or more rows.)
So, to see the first ten rows, you can say:
df[:10]
And of course, if you're interested only in seeing the first ten HTTP requests that came into the server, then you can say:
df.r[:10]
When you ask for a single column from a data frame, you're really getting a Pandas series, with all of its abilities.
One of the things you often will want to do with a data frame is figure
out the most popular data. This is especially true when working with
logfiles, which are supposed to give you some insights into your work.
For example, perhaps you want to find out which URLs were most
popular. You can ask to count all of the rows in
df:
df.count()
This will give you a total of all rows. But, you also can retrieve a single column (which is a Pandas series) and ask it to count the number of times each value appears:
df['r'].value_counts()
The resulting series has indexes that are the values (that is, URLs) themselves and also a count (in descending order) of the number of times each one appeared.
Plotting
This is already great, but you can do even better and plot the results. For example, you might want to have a bar graph indicating how many times each of the top ten URLs was invoked. You can say:
df['r'].value_counts()[:10].plot.bar()
Notice how you take the original data frame, count the number of times
each value appears, take the top ten of those, and then invoke methods
for plotting via Matplotlib, producing a simple, but effective, bar
chart. If you're using Jupyter and invoked
%pylab
inline, this
actually will appear in your browser window, rather than an external
program.
You similarly can make a pie chart:
df['r'].value_counts()[:10].plot.pie()
But wait a second. This chart indicates that the most popular URL by
a long shot was /feed/, a URL used by RSS readers to access my blog.
Although that's flattering, it masks the other data I'm interested in.
You thus can use "boolean indexing" to retrieve a subset of rows from
df and then plot only those rows:
df[~df.r.str.contains('/feed/')]['r'].value_counts()[:10].plot.pie()
Whoa...that looks huge and complicated. Let's break it apart to understand what's going on:
This used boolean indexing to retrieve some rows and get rid of others. The conditions are expressed using a combination of generic Python and NumPy/Pandas-specific syntax and code.
This example used the
str.containsmethod provided by Pandas, which enables you to find all of the rows where the URL contained "/feed/".
Then, the example used the (normally) bitwise operator ~ to invert the logic of what you're trying to find.
Finally, the result is plotted, providing a picture of which URLs were and were not popular.
Reading the data from CSV and into a data frame gives great flexibility in manipulating the data and, eventually, in plotting it.
Conclusion
In this article, I described how to read logfile data into Pandas and even executed a few small plots with it. In a future article, I plan to explain how you can transform data even more to provide insights for everyone interested in the logfile., also is excellent, approaching problems from a different angle. Both are published by O'Reilly, and both are worth reading if you're interested in data science.
To learn more about the Python tools used in data science, check out the sites for NumPy, SciPy, Pandas and IPython. There is a lot to learn, so be prepared for a deep dive and lots of reading.
Pandas is available from, and documented at,.
Python itself is available from here, and the PyPI package index, from which you can download all of the packages mentioned in this article, is here. | https://www.linuxjournal.com/content/pandas | CC-MAIN-2020-29 | refinedweb | 2,326 | 57.91 |
In the previous lessons on integers, we covered that C++ only guarantees that integer variables will have a minimum size -- but they could be larger, depending on the target system.
Why isn’t the size of the integer variables fixed?
The short answer is that this goes back to C, when computers were slow and performance was of the utmost concern. C opted to intentionally leave the size of an integer open so that the compiler implementors could pick a size for int that performs best on the target computer architecture.
Doesn’t this suck?
By modern standards, yes. As a programmer, it’s a little ridiculous to have to deal with types that have uncertain ranges. A program that uses more than the minimum guaranteed ranges might work on one architecture but not on another.:
The fixed-width integers have two downsides: First, they are optional and only exist if there are fundamental types matching their widths and following a certain binary representation. Using a fixed-width integer makes your code less portable, it might not compile on other systems.
Second, if you use a fixed-width integer, it may also be slower than a wider type on some architectures. If you need an integer to hold values from -10 to 20, you might be tempted to use std::int8_t. But your CPU might be better at processing 32 bit wide integers, so you just lost speed by making a restriction to that wasn’t necessary.
std::int8_t
Warning
The above fixed-width integers should be avoided, as they may not be defined on all target architectures.
Fast and least integers
To help address the above downsides, C++ also defines two alternative sets of integers.
The fast type (std::int_fast#_t) provides the fastest signed integer type with a width of at least # bits (where # = 8, 16, 32, or 64). For example, std::int_fast32_t will give you the fastest signed integer type that’s at least 32 bits.
The least type (std::int_least#_t) provides the smallest signed integer type with a width of at least # bits (where # = 8, 16, 32, or 64). For example, std::int_least32_t will give you the smallest signed integer type that’s at least 32 bits.
Here’s an example from the author’s Visual Studio (32-bit console application):
This produced the result:
fast 8: 8 bits
fast 16: 32 bits
fast 32: 32 bits
least 8: 8 bits
least 16: 16 bits
least 32: 32 bits
You can see that std::int_fast16_t was 32 bits, whereas std::int_least16_t was 16 bits.
There is also an unsigned set of fast and least types (std::uint_fast#_t and std::uint_least#_t).
These fast and least types are guaranteed to be defined, and are safe to use.
Best practice
Favor the std::int_fast#_t and std::int_least#_t integers when you need an integer guaranteed to be at least a certain minimum size.
Warning: std::int8_t and std::uint8_t may behave like chars instead of integers
Note: We talk more about chars in lesson (4.11 -- Chars).
Due to an oversight in the C++ specification, most compilers define and treat std::int8_t and std::uint8_t (and the corresponding fast and least fixed-width types) identically to types signed char and unsigned char respectively. Consequently, std::cin and std::cout may work differently than you’re expecting. Here’s a sample program showing this:
On most systems, this program will print ‘A’ (treating myint as a char). However, on some systems, this may print 65 as expected.
For simplicity, it’s best to avoid std::int8_t and std::uint8_t (and the related fast and least types) altogether (use std::int16_t or std::uint16_t instead). However, if you do use std::int8_t or std::uint8_t, you should be careful of anything that would interpret std::int8_t or std::uint8_t as a char instead of an integer (this includes std::cout and std::cin).
Hopefully this will be clarified by a future draft of C++.
Avoid the 8-bit fixed-width integer types. If you do use them, note that they are often treated like chars.
Integer best practices
Now that fixed-width integers have been added to C++, the best practice for integers in C++ is as follows:
Avoid the following if possible:
What is std::size_t?
Consider the following code:
On the author’s machine, this prints:
4
Pretty simple, right? We can infer that operator sizeof returns an integer value -- but what integer type is that value? An int? A short? The answer is that sizeof (and many functions that return a size or length value) return a value of type std::size_t. std::size_t is defined as an unsigned integral type, and it is typically used to represent the size or length of objects.
Amusingly, we can use the sizeof operator (which returns a value of type std::size_t) to ask for the size of std::size_t itself:
Compiled as a 32-bit (4 byte) console app on the author’s system, this prints: wrapping around.
"any object larger than the largest value size_t can hold is considered ill-formed (and will cause a compile error)"
On my machine sizeof(size_t) is 4 bytes.
But isn't "long long" 8 bytes?
Thanks in Advance!
If `std::size_t` is 4 bytes wide, then the largest value it can hold is 4294967295. That's more than 8.
I dont understand what the difference between int_least16_t and uint_fast16_t is. Can you explain what do they both do? I read up something on other websites but still dont understand.
for int_least_16_t
int_least8_t
int_least16_t
int_least32_t
int_least64_t
smallest signed integer type with width of at least 8, 16, 32 and 64 bits respectively
(typedef)
for uint_fast16_t
uint_fast8_t
uint_fast16_t
uint_fast32_t
uint_fast64_t
fastest unsigned integer type with width of at least 8, 16, 32 and 64 bits respectively
(typedef)
But why uint_fast16_t has 32 bits? And when do I use them?
unit_fast16_t is not of 32 bits,it is of 16 bits.
well,i also don't know how to use them
`uint_fast16_t` is at least 16 bits wide, but can be wider if it wants to. If a 32 bit integer is faster, it will be a 32 bit integer. If a 64 bit integer is faster, it will be a 64 bit integer.
`uint_least16_t` is also at least 16 bits wide, and if your compiler supports 16 bit integer, it will be a 16 bit integer. Only if your compiler doesn't support 16 bit integers, it will be wider.
I think it would be helpful for others if you mention:
Fastest signed integer is basically that no. of bits(according to the specific computer) integer that interacts with the computer fastest with atleast specified no. of bits(user given).
Hii
I find this lesson kind of complicated :( is it okay if I don't understand everything fully for now ?
Yes, that's fine
Is it necessary to use the "std" namespace when defining fixed-width integer data type in C++17? For example, I was able to run the following without any issues (might be because I am using visual studio):
Some headers in C++ are backwards compatible with C (which didn't have namespaces), so you can use their contents without `std::`. Since we're not in C, we recommend using the `std::` prefix.
"std::size_t is defined an unsigned integral type, and it is typically used to represent the size or length of objects."
shouldn't it be "std::size_t is defined AS an unsigned integral type.."?
Yep, thanks!
in fixed-width integers, how come does the architecture of CPU no loner matter? what if there would be a limitation on a specific CUP that couldn't support Fixed-width integers?
"A program that uses more than the minimum guaranteed ranges might work on one architecture but not on another."
With these days modern standards, why still is there such limitation?
I mean is this limitation still because there might be some CPU architectures out there that can't support more than the minimum guaranteed ranges and have limitation about that?
There still are, and there will continue to be, systems with limited resources, such as Arduinos or embedded systems.
Thank you so much.
"The fixed-width integers have two downsides: First, they may not be supported on architectures where those types can’t be represented. They may also be less performant than the built-in types on some architectures."
I don't understand what that means. Like x64, x32 or like Mac/Windows? I've been following through all the tutorials but I feel like this one assumes a certain knowledge the average person learning to code may not understand.
hi!
I adjusted the sentence in question to hopefully make it easier to understand.
It's not possible to name an architecture, because the availability of the type is decided by the implementation (Compiler + standard library), not the architecture (Although, they often go hand in hand). Even if you're on a 64 bit system, an implementation might not provide a fundamental integer that is 64 bits wide (Usually `long long` is 64 bits wide, but it can be narrower). If no such integer exists, you don't get the fixed width types for it.
What is "std::uint8_t"? You never ìntroduced what that means.
Thank you
It's in the table: std::uint8_t, 1 byte unsigned integer, treated like an unsigned char on many systems.
In the fixed-width integers section, for what reason are you using direct initialization over the suggested best practice brace/uniform initialization?
Roughly when was brace initialization considered best practice?
The lesson now uses list initialization as recommended, thanks for pointing out the inconsistency!
List initialization was added in C++11 with a fix for `auto` in C++14 (I think). The majority of developers don't use list initialization, so I wouldn't say it is considered best practice generally. List initialization is superior to the other initializations, but not (yet) widely popular.
There are two types of best practices. Those that are superior in method (e.g. they produce better results than than the alternatives), and those that are standard ways of doing things (and doing otherwise would go against expectation).
List initialization is the first kind of best practice.
In the penultimate paragraph, it says "largest number representable by a 4-byte unsigned integer (per the table above, 4,294,967,295 bytes)." I believe the last word in the parenthesis, "bytes", should not be there.
I also want to send a big thank you to you guys for your effort and time put into creating learncpp.com! It's amazing!
Shouldn't be there, removed, thanks for helping improve learncpp!
Can somebody help me clear this doubt:
Suppose I have two computers, computerA, and computerB.
let's say I created test.cpp on both computerA, and computerB.
test.cpp :
I compiled both files seperatly in computerA, and computerB. I named both executable files as cmpA.exe and cmpB.exe, i ran both executables, and the output was:
My question is, What would be the:
Output of cmpA.exe in computerB? (would it still overflow and give undefined value?).
Output of cmpB.exe in computerA?
Assuming both computers are able to understand either binary:
> Output of cmpA.exe in computerB?
Same as on computerA.
> Output of cmpB.exe in computerA?
Same as on computerB.
Types exist only in code (With few exceptions). Once you compile the code into a binary, the types are gone. The instructions in the binary tell the computer to use a 16 bit (2 bytes) register, so that's what the computer will do.
It's likely that your compiler optmized away `intX`, so the overfloat occurred during compile-time already. This means you should get the same undefined number on both computers.
When do we actually need to use size_t?
If you want to store the results of a sizeof() call in a variable, that variable should ideally be of type size_t.
size_t is also used in a lot of places in the standard library. So if you see it in reference documentation, you'll know what it is.
What does < fast > integer function exactly do? For what is it called "fast"?
Why is int8_t treated as char? What do you mean by "due to an oversight”
What does < t > at the end of int8_t and size_t stand for? Does it stand for "type"?
> For what is it called "fast"?
`int*_fast_t` are the fastest integer type that is at least * bits wide, even if that means that the type will resolve to a wider type. If you request a `std::int_fast32_t` but a 64 bit integer is faster on your system, you'll get a 64 bit integer. The "least" types on the contrary will try to prevent being wider than requested.
> int8_t
I don't know what Alex meant by "due to an oversight". `int8_t` is commonly an alias for a `signed char`. Remember that `char` is an integral type.
> What does < t > at the end of int8_t and size_t stand for? Does it stand for "type"?
Yep, "type".
Does the compiler do a test to which type of integer is faster(e.g., 8 bit, 16 bit, etc.)? In other words how does your computer know which type of int is the faster than the other? Thank you :)
The standard library implementation and compiler can work together to figure out what the fastest type is. Taking libstdc++ (The standard library that ships with gcc) as an example, the choice is made using a simple preprocessor macro
libstdc++ unconditionally assumes that `signed char` is the best fit for `int_fast8_t`. For all other types, it uses 32 bit integers if you're compiling as 32 bit (with the exception of `int_fast64_t`, because it can't be 32 bits wide), or 64 bit integers if you're compiling as 64 bit.
As mentioned in the lesson: The fast type (std::int_fast#_t) provides the fastest signed integer type with a width of at least # bits (where # = 8, 16, 32, or 64). For example, std::int_fast32_t will give you the fastest signed integer type that’s at least 32 bits.
Now let's assume the fastest integer 64 bits.In this case
std::cout << sizeof(std::int_fast8_t) * 8;
std::cout << sizeof(std::int_fast16_t) * 8 ;
std::cout << sizeof(std::int_fast32_t) * 8 ;
the output must be : 64 64 64 right??, since the fastest one is 64 bits, but the result in my computer was 8 , 16 , 32 , so which one is the fastest one ?
All the same, according to your standard libraries implementation.
What is the address width of the application? Is it like, the 32-bit architecture will have 32 digits in its address?
Yes, 32 binary digits, aka 32 bits.
You're not getting my point. suppose if we are considering a computer with 8-bit architecture then the address of all memory locations in this computer will be of exactly 8 digits?
Yes, 8 binary digits, aka 8 bits.
0000 1010
1111 0010
1010 0011
Those are addresses in an 8 bit architecture. If you're using another number system, you'll have less digits, eg. in hex an 8 bit architecture has 2 digits.
0x0a
0xf2
0xa3
Same as above, just a different way of writing it.
Thanks!!
Is it possible to create a data type using structs with size more than the range of size_t? If no, then why And why it is a compilation error?
Types can't be larger than the maximum value representable by `std::size_t`.
Hi nascardriver ,,
suppose we create a struct that holds an integer and float, in this case we have a data type(struct)holds a 4bytes integer and 4bytes float totally 8bytes data type.So how this data type is working if we have the maximum value by `std::size_t` 4byte ??
The maximum object size is the maximum _value_, not width, of `std::size_t`. The maximum value of `std::size_t` on a 64 bit system is 2^64 (Power, not xor). That's more than 16 million TiB.
Thank you a lot
Hey, it is mentioned in this tutorial that "The least type (std::int_least#_t) provides the smallest signed integer type with a width of at least # bits". What is mean by the smallest signed integer here? I've searched a lot on Google but I didn't any information beyond this! And how it is different from int#_t since it also uses at least # bits too.
int#_t is exactly # bits wide. int_fast#_t and int_least#_t are at least # bits wide, but can be wider.
int_fast#_t uses the fastest type available.
int_least#_t uses the narrowest type available.
example
int16_t is always a 16 bit integer, but this type might not exist.
int_fast16_t might turn into a 64 bit integer, because that's the fastest on your system.
int_least16_t might turn into a 32 bit integer, because your system doesn't have a 16 bit integer, but 32 is less than 64.
Thanks!
your explanations are helpful a lot.
Suuuper small typo BUT i'll point it out anyway :D.
On the first waning you missed a ' - '
"Warning
The above fixed (you missed me Alex) width integers should..."
Why not make such a good tutorial more perfect. :p
Lesson updated, thanks!
Name (required)
Website
Save my name, email, and website in this browser for the next time I comment. | https://www.learncpp.com/cpp-tutorial/fixed-width-integers-and-size-t/ | CC-MAIN-2020-29 | refinedweb | 2,922 | 64.81 |
From: Phil Richards (news_at_[hidden])
Date: 2005-10-20 12:56:45
On 2005-10-20, Deane Yang <deane_yang_at_[hidden]> wrote:
> Phil Richards wrote:
> > I gave up trying to progress with boost-ing it because I couldn't see
> > a path of consensus to what was actually wanted. There seemed to be
> > a desire for more flexibility, and I couldn't see the point :-(
[...]
> > In fact, all I've ever needed is physical4_system (mass, length,
> > time, temperature), and never need to convert units because we
> > always insist on everything being in SI.
> Yes, this was a major issue.
>
> Do we want a general dimensions library (without even predefining ANY
> dimensions at all) that I claim would have extremely broad
> applicability outside physics, or do we want a physical-dimensions-only
> library that predefines a standard or basic collection of physical
> dimensions.?
The difficulty with not having some basics defined is that it makes
things a lot more complicated :-)
SI length is defined as having unit in meters. meters has
dimensionality of length^1, in SI (and Imperial, for that matter).
If you change your basis set for dimensional analysis, then length
may not be a basis dimension in this new basis set. This different
new (derived) definition of "length" dimensionality has to exist in
a different namespace to SI length. And it *can't* interoperate
with it particularly easily - you would have to specify the
transformation of each basis dimension from one family to the other.
Which, funnily enough, is where I got to with my "dimensionality
family" stuff. I was never enormously convinced of my solution, it
has to be said, but it works if all you want to do is define disjoint,
non-interacting, dimensional analysis systems.
(You can define any basis set you like, and call things in it anything
you like (everything is partitioned either by class or namespace scopes)
but it won't let you mix-and-match things of different families.)
> I am strongly in favor of the former, I have also never understood why
> the latter couldn't be built using the former anyway, so the library
> could have two layers to it. The physicists could ignore the lower layer
> completely, and I could ignore the upper one.
Ok, give a few examples of different basis sets that may be wanted :-)
The best thing to drive this forward are real use-cases for the library.
Without them, we'll just go round in circles again.
phil
-- change name before "@" to "phil" for email
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk | https://lists.boost.org/Archives/boost/2005/10/95715.php | CC-MAIN-2021-21 | refinedweb | 442 | 59.94 |
What is RecyclerView?
RecyclerView is an advanced version of ListView. It is designed to work with large data sets with better performance and memory management. The combination of CardView and RecyclerView gives you more flexibility for designing complex views.
RecyclerView v/s ListView
Let's start with RecyclerView in a project, step by step.
Step 1
Create a new project from File -> New Project, fill in all the necessary details, and select "Blank Activity".
Step 2
Android Studio introduced a new way to manage dependencies through Gradle. So to use RecyclerView in your project, you need to add a Gradle dependency.
Open the build.gradle file and add the libraries below inside the dependencies block.
dependencies {
    compile fileTree(dir: 'libs', include: ['*.jar'])
    testCompile 'junit:junit:4.12'
    compile 'com.android.support:appcompat-v7:23.1.1'
    compile 'com.android.support:design:23.1.1'
    compile 'com.android.support:recyclerview-v7:23.1.1'
}
This adds all the necessary library files to our project as dependencies. Gradle will download the binaries for the specified version, "23.1.1", from the repository server.
Step 3
You are done with the dependency integration. Now add a RecyclerView to your layout.
When you create an application with "Blank Activity", it creates a MainActivity.java and an activity_main.xml layout file. Open the layout file and add the view inside.
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    tools:context="chintan.rathod.recyclerviewsample.MainActivity">

    <android.support.v7.widget.RecyclerView
        android:id="@+id/recyclerView"
        android:layout_width="match_parent"
        android:layout_height="match_parent"
        android:scrollbars="vertical" />

</RelativeLayout>
In the above code, we did four things:
– We took a reference to "RecyclerView" from the "support v7" library and declared it
– We gave an "id" to the declared "RecyclerView"
– We set its "height" and "width"
– We set "scrollbars" to vertical mode
Step 4
Two main things are required when you are working with a list:
1. An entity which will bind to your view
2. An item layout file which will be used to design your item
Step 4.1
So first we will create a class file called Item.java and define the necessary fields in it.
public class Item {
    private String description;

    public Item(String description) {
        this.description = description;
    }

    public String getDescription() {
        return description;
    }

    public void setDescription(String description) {
        this.description = description;
    }
}
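As a quick sanity check, here is a minimal, plain-Java sketch of how this entity is used to build the data set that will back the list. The sample descriptions and the helper method name are made up for illustration:

```java
import java.util.ArrayList;
import java.util.List;

public class ItemDemo {
    // Minimal copy of the Item entity from Step 4.1, so this sketch is self-contained
    public static class Item {
        private String description;

        public Item(String description) {
            this.description = description;
        }

        public String getDescription() {
            return description;
        }
    }

    // Build a list of n items with hypothetical sample descriptions
    public static List<Item> buildItems(int n) {
        List<Item> itemList = new ArrayList<>();
        for (int i = 1; i <= n; i++) {
            itemList.add(new Item("Item number " + i));
        }
        return itemList;
    }

    public static void main(String[] args) {
        List<Item> items = buildItems(5);
        System.out.println(items.size());                  // prints 5
        System.out.println(items.get(0).getDescription()); // prints "Item number 1"
    }
}
```

A list like this is exactly what we will hand to the adapter later on.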
Step 4.2
Create a new layout file called item_layout.xml to design the item layout for our list.
<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="wrap_content"
    android:orientation="vertical">

    <TextView
        android:id="@+id/txtDescription"
        android:layout_width="match_parent"
        android:layout_height="wrap_content" />

</LinearLayout>
In the above layout we took a LinearLayout as the parent and added a TextView as its child. In the design preview, it looks like this:
Item layout output
Step 5
In RecyclerView, it is mandatory to define a ViewHolder. To create one, create a new class file called ItemHolder.
Holder classes are those which hold references to the layout views used in items. In our case that is only a single TextView.
public class ItemHolder extends RecyclerView.ViewHolder {
    public TextView txtDescription;

    public ItemHolder(View view) {
        super(view);
        txtDescription = (TextView) view.findViewById(R.id.txtDescription);
    }
}
Here, we need to understand the functionality and importance of RecyclerView.ViewHolder.
If you noticed, we declared a class that subclasses RecyclerView.ViewHolder by extending it:
public class ItemHolder extends RecyclerView.ViewHolder
Since we created a subclass, we need to pass our newly created view object to the superclass by calling super(view).
The rest is the usual pattern of finding a view and assigning it to a widget field.
txtDescription = (TextView) view.findViewById(R.id.txtDescription);
This searches the provided view object for the id R.id.txtDescription, casts the result to TextView, and assigns it to the “txtDescription” field.
Step 6
Everything so far was preparation. The actual use happens in the adapter, which we are going to define next.
Create an ItemAdapter.java class by extending RecyclerView.Adapter<ItemHolder>, like below.
```java
public class ItemAdapter extends RecyclerView.Adapter<ItemHolder> {

    private List<Item> itemList;

    public ItemAdapter(List<Item> itemList) {
        this.itemList = itemList;
    }

    @Override
    public ItemHolder onCreateViewHolder(ViewGroup parent, int viewType) {
        View itemView = LayoutInflater.from(parent.getContext())
                .inflate(R.layout.item_layout, parent, false);
        return new ItemHolder(itemView);
    }

    @Override
    public void onBindViewHolder(ItemHolder holder, int position) {
        Item item = itemList.get(position);
        holder.txtDescription.setText(item.getDescription());
    }

    @Override
    public int getItemCount() {
        return itemList.size();
    }
}
```
6.1 Class declaration
public class ItemAdapter extends RecyclerView.Adapter<ItemHolder>
When you write this line, you create a subclass of “RecyclerView.Adapter<>”. Here, you need to pass the “ViewHolder” you created previously in “Step 5”.
6.2 Constructor
```java
public ItemAdapter(List<Item> itemList) {
    this.itemList = itemList;
}
```
You need to create a constructor to pass in your list data so that the adapter can work on it. In this example, we are passing a list of the “Item” class which we created in “Step 4”. Simply create a field “itemList” in the adapter and assign the list which is passed in the constructor.
6.3 ViewHolder mapping
```java
@Override
public ItemHolder onCreateViewHolder(ViewGroup parent, int viewType) {
    View itemView = LayoutInflater.from(parent.getContext())
            .inflate(R.layout.item_layout, parent, false);
    return new ItemHolder(itemView);
}
```
This is an overridden method of the Adapter class. Its main job is binding the view with the holder. In this method we create a view by inflating the layout and pass that newly created view object to our ItemHolder constructor. The rest of the work is taken care of by our “ItemHolder” class (see Step 5).
6.4 Binding Holder with item
```java
@Override
public void onBindViewHolder(ItemHolder holder, int position) {
    Item item = itemList.get(position);
    holder.txtDescription.setText(item.getDescription());
}
```
As described in 6.3, we created the ItemHolder, which is handed to this method as an argument. Now we need to give it the data to bind. As you can see in this method, we take an item from the list and set its text on the “txtDescription” object.
6.5. Counting list item
```java
@Override
public int getItemCount() {
    return itemList.size();
}
```
This method tells the adapter the size of the item list. Whatever count you pass here, the adapter will create that many items for the RecyclerView and push them to it. Make sure you pass the proper size of the list, since in the previous step you are taking objects from that list. If you pass a wrong count, you will encounter an IndexOutOfBoundsException.
Step 7
This is the final step: assigning the adapter to the RecyclerView. Open your MainActivity.java file and paste the code below.
```java
public class MainActivity extends AppCompatActivity {

    private RecyclerView recyclerView;
    private ItemAdapter itemAdapter;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);

        ArrayList<Item> itemList = new ArrayList<>();
        fillDummyData(itemList);

        recyclerView = (RecyclerView) findViewById(R.id.recycler_view);
        itemAdapter = new ItemAdapter(itemList);

        RecyclerView.LayoutManager mLayoutManager = new LinearLayoutManager(getApplicationContext());
        recyclerView.setLayoutManager(mLayoutManager);
        recyclerView.setItemAnimator(new DefaultItemAnimator());
        recyclerView.setAdapter(itemAdapter);
    }

    private void fillDummyData(ArrayList<Item> itemList) {
        for (int count = 1; count < 20; count++) {
            Item item = new Item("Item " + count);
            itemList.add(item);
        }
    }
}
```
We first take the view from the layout and assign it to an object. Then we create an object of “ItemAdapter” with dummy data, which we already prepared with the help of the “fillDummyData” method.
Step 8
Run your application. Output will be like below.
Download Code
Source code for this article is available here.
Posted On: Dec 16, 2019.
With this release, CloudFormation users can:
- Create an HTTP API, a JSON Web Token (JWT) authorizer, and a stage for an HTTP API, and specify the VPC endpoint IDs of an API to create Route 53 aliases in Amazon API Gateway.
- Specify a provisioned concurrency configuration for a function's alias and version in AWS Lambda.
- Specify a parameter in AWS Step Functions to enable express workflows.
- Specify an access point in Amazon Simple Storage Service (S3).
- Create an analyzer for AWS IAM Access Analyzer.
- Specify a discoverer, schema, and event associated with an event bus in Amazon EventBridge.
- Configure destinations and error handling for asynchronous invocation in AWS Lambda.
- Specify the variable namespace associated with an action in AWS CodePipeline.
- Specify an Amazon Simple Queue Service (Amazon SQS) queue or Amazon Simple Notification Service (Amazon SNS) topic as a destination for discarded records, the maximum age of a record that AWS Lambda sends to a function for processing, the maximum number of times to retry when a function returns an error, and the number of batches to process from each shard concurrently, as well as split a batch in two and retry if a function returns an error in AWS Lambda.
- Specify the granularity, in seconds, of returned data points in Amazon Cloudwatch Alarms.
- Specify which task set in a service is the primary task set, create a task set in the specified cluster and service, and specify the setting to use when creating a cluster, the deployment controller to use for the service, the FireLens configuration for the container, and the total amount of swap memory (in MiB) a container can use. You can also tune a container's memory swappiness behavior in Amazon ECS.
- Use the latest version of AWS WAF, a web application firewall that lets users monitor HTTP(S) requests that are forwarded to an Amazon API Gateway API, Amazon CloudFront, or an Application Load Balancer.
- Create a Contributor Insights rule in Amazon CloudWatch Logs.
- Specify the caching behavior of your AWS AppSync resolver, the conflict detection and resolution strategy, the ARN of the AWS Lambda function used for handling conflicts, and the delta sync configuration for your versioned AWS AppSync data source, and enable resolver caching with AWS AppSync.
- Enable the HTTP endpoint for an Aurora Serverless DB cluster and use Kerberos Authentication to authenticate users that connect to the DB instance in Amazon RDS.
- Specify any tags for the Elastic IP address in Amazon EC2.
- Configure Amazon Elasticsearch to use Amazon Cognito authentication for Kibana.
- Specify which version of AWS Glue a machine learning transform is compatible with.
- Specify a list of tags that you want to attach to the newly created user in AWS IAM.
- Specify a custom domain on an OpsWorks for Chef Automate Server running Chef Automate 2.0, a PEM-formatted HTTPS certificate for a server with a custom domain and a private key in PEM format for connecting to a server that uses a custom domain in AWS OpsWorks.
- Specify the redrive policy JSON assigned to the subscription in Amazon SNS.
- Use ZipFile in nodejs10.x for AWS Lambda RunTime.
- Create a new managed node group in Amazon EKS.
These resources are now available in all public AWS Regions as well as all AWS GovCloud Regions. For more information, see the AWS Region Table.
For more information, please refer to the CloudFormation release history page.
```haskell
{-# LANGUAGE NoImplicitPrelude #-}
-----------------------------------------------------------------------------
-- |
-- Module      :  Math.Combinatorics.Species.NewtonRaphson
-- Copyright   :  (c) Brent Yorgey 2010
-- License     :  BSD-style (see LICENSE)
-- Maintainer  :  byorgey@cis.upenn.edu
-- Stability   :  experimental
--
-- The Newton-Raphson iterative method for computing with recursive
-- species.  Any species @T@ which can be written in the form
-- @T = X*R(T)@ (the species of "@R@-enriched rooted trees") may be
-- computed by a quadratically converging iterative process.  In fact
-- we may also compute species of the form @T = N + X*R(T)@ for any
-- integer species @N@, by iteratively computing @T' = X*R(T' + N)@
-- and then adding @N@.
-----------------------------------------------------------------------------

module Math.Combinatorics.Species.NewtonRaphson
    ( newtonRaphsonIter
    , newtonRaphson
    , newtonRaphsonRec
    , solveForR
    ) where

import NumericPrelude
import PreludeBase

import Math.Combinatorics.Species.Class
import Math.Combinatorics.Species.AST
import Math.Combinatorics.Species.AST.Instances (reflect)
import Math.Combinatorics.Species.Simplify

import Data.Typeable

import Control.Monad (guard)
import Data.List (delete)

-- | A single iteration of the Newton-Raphson method.
--   @newtonRaphsonIter r k a@ assumes that @a@ is a species having
--   contact of order @k@ with species @t = x '*' (r ``o`` t)@ (that
--   is, @a@ and @t@ agree on all label sets of size up to and
--   including @k@), and returns a new species with contact of order
--   @2k+2@ with @t@.
--
--   See BLL section 3.3.
newtonRaphsonIter :: Species s => s -> Integer -> s -> s
newtonRaphsonIter r k a = a + sum as
  where
    p  = x * (r `o` a)
    q  = x * (oneHole r `o` a)
    ps = map (p `ofSizeExactly`) [k+1 .. 2*k+2]
    qs = map (q `ofSizeExactly`) [1 .. k+1]
    as = zipWith (+) ps
           (map (sum . zipWith (*) qs) $ map reverse (inits' as))

-- | Lazier version of inits.
inits' :: [a] -> [[a]]
inits' xs = [] : inits'' xs
  where inits'' []     = []
        inits'' (x:xs) = map (x:) (inits' xs)

-- | Given a species @r@ and a desired accuracy @k@, @'newtonRaphson'
--   r k@ computes a species which has contact at least @k@ with the
--   species @t = x '*' (r ``o`` t)@.
newtonRaphson :: Species s => s -> Integer -> s
newtonRaphson r n = newtonRaphson' 0 0
  where
    newtonRaphson' a k
      | k >= n    = a
      | otherwise = newtonRaphson' (newtonRaphsonIter r k a) (2*k + 2)

-- | @'newtonRaphsonRec' f k@ tries to compute the recursive species
--   represented by the code @f@ up to order at least @k@, using
--   Newton-Raphson iteration.  Returns 'Nothing' if @f@ cannot be
--   written in the form @f = X*R(f)@ for some species @R@.
newtonRaphsonRec :: (ASTFunctor f, Species s) => f -> Integer -> Maybe s
newtonRaphsonRec code k =
    fmap (\(n,r) -> n + newtonRaphson r k) (solveForR code)

-- | Given a code @f@ representing a recursive species, try to find an
--   integer species N and species R such that @f = N + X*R(f)@.  If
--   such species can be found, return @'Just' (N,R)@; otherwise
--   return 'Nothing'.
solveForR :: (ASTFunctor f, Species s) => f -> Maybe (s, s)
solveForR code = do
  let terms = sumOfProducts . erase' $ apply code (TRec code)
  guard . not . null $ terms

  -- If there is a constant term, it will be the first one; pull it out.
  let (n, terms') = case terms of
                      ([One] : ts) -> (One, ts)
                      ([N n] : ts) -> (N n, ts)
                      ts           -> (Zero, ts)

  -- Now we need to be able to factor an X out of the rest.
  guard $ all (X `elem`) terms'
  -- XXX this is wrong, what if there are still occurrences of X remaining?

  -- Now replace every recursive occurrence by (n + X).
  let r = foldr1 (+) $
            map (foldr1 (*) . map (substRec code (n + x)) . delete X) terms'

  return (reflect n, reflect r)
```
NAMEdevname — get device name
LIBRARYStandard C Library (libc, -lc)
SYNOPSIS#include < sys/stat.h>
#include < stdlib.h>
char *
devname( dev_t dev, mode_t type);
char *
devname_r( dev_t dev, mode_t type, char *buf, int len);
char *
fdevname( int fd);
char *
fdevname_r( int fd, char *buf, int len);
DESCRIPTIONThe devname() function returns a pointer to the name of the block or character device in /dev with a device number of dev, and a file type matching the one encoded in type which must be one of S_IFBLK or S_IFCHR. To find the right name, devname() asks the kernel via the kern.devname sysctl. If it is unable to come up with a suitable name, it will format the information encapsulated in dev and type in a human-readable format.
The fdevname() and fdevname_r() function obtains the device name directly from a file descriptor pointing to a character device. If it is unable to come up with a suitable name, these functions will return a NULL pointer.
devname() and fdevname() return the name stored in a static buffer which will be overwritten on subsequent calls. devname_r() and fdevname_r() take a buffer and length as argument to avoid this problem.
EXAMPLES
int fd; struct stat buf; char *name; fd = open("/dev/tun"); fstat(fd, &buf); printf("devname is /dev/%s\n", devname(buf.st_rdev, S_IFCHR)); printf("fdevname is /dev/%s\n", fdevname(fd)); | http://www.yosbits.com/opensonar/rest/man/freebsd/man/en/man3/fdevname_r.3.html?l=en | CC-MAIN-2017-51 | refinedweb | 228 | 52.09 |
Opened 6 years ago
Closed 23 months ago
#14671 closed Bug (wontfix)
Allow overriding of ModelChoiceField.choices
Description
This patch fixes a validation bug when attempting to override choices manually on a ModelChoiceField. Comments in code suggest this should be able to be done.
Note, this used to work until the recent model/form validation changes.
Attachments (1)
Change History (11)
Changed 6 years ago by
comment:1 Changed 6 years ago by
comment:2 Changed 6 years ago by
comment:3 Changed 6 years ago by
-1 on the addition of BaseModelFormSet to
__all__. Although the classes can be useful, they are abstract, and can be imported explicitly with:
from forms.models import BaseModelFormSet
For the problem at hand, a test that illustrates the bug would help immensely. Reviewing the patch, it looks reasonable, except the choice of variable name for
pk, given that something else could be in
self.to_field_name.
comment:4 Changed 6 years ago by
comment:5 Changed 6 years ago by
The patch needs documentation, tests and reasons why BaseModelFormSet and BaseInlineFormset should be added to all.
comment:6 Changed 6 years ago by
comment:7 Changed 5 years ago by
Change UI/UX from NULL to False.
comment:8 Changed 5 years ago by
Change Easy pickings from NULL to False.
comment:9 Changed 23 months ago by
comment:10 Changed 23 months ago by
I can't think of a use case where it makes sense to add abitrary choices to the ones available for a ModelChoiceField via its queryset. See for a way to use a normal ChoiceField to accomplish this.
If the desire is to change the appearance of the choices when they are rendered, there is the
ModelChoiceField.label_from_instance method that can be overridden.
Note the patch has also swept up the addition of 'BaseModelFormSet', 'BaseInlineFormSet' to all (they can be useful!) | https://code.djangoproject.com/ticket/14671 | CC-MAIN-2017-09 | refinedweb | 312 | 61.26 |
A queue is a data structure to which we can insert data and also delete from it. Unlike stack, which follows the LIFO principle ( Last In First Out ), queue implements the FIFO principle ( First In First Out ). In a queue of people, the first person to enter the line is also the first person to exit from it. Similarly, the first item pushed inside a queue is also the first item to be popped from the queue. In this article, we shall look into a queue and implement the peek function in the python queue.
Operations inside a queue
Two main operations can be performed on a queue – enqueue and dequeue. Before understanding the two operations, we first need to understand 'front' and 'rear' in a queue. 'Front' and 'Rear' are indexes to the first and last element of the queue, respectively. We use 'Front' when we want to delete an element from the queue and 'Rear' when we want to insert an element into the queue.
- Enqueue : The enqueue operation is used to add an item to the queue. Before performing an enqueue operation, we have to check whether the queue is full or not. Only if the queue has space will we perform an enqueue operation. To perform an enqueue operation, we have to use 'Rear', as new elements are added at the rear end of the queue.
- Dequeue : The dequeue operation is used to pop elements from the queue. While performing a dequeue, we will first check if the queue is empty or not. If the queue is empty, then we shall report an 'Underflow' condition; else we shall pop an element. To perform a dequeue operation, we have to use 'Front' to access the topmost element of the queue in order to pop it.
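For comparison, Python's standard library already models these two operations: collections.deque gives FIFO behaviour out of the box (this is just a quick illustration; it is not used in the hand-rolled class this article builds).

```python
from collections import deque

q = deque()
q.append(10)         # enqueue: add at the rear
q.append(20)
first = q.popleft()  # dequeue: remove from the front (FIFO)
print(first)         # -> 10
print(list(q))       # -> [20]
```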
Queue Using a Class
We will define a queue using a class. We shall define inside the class initially the two operations – enqueue and dequeue and a print function to print the queue.
The name of the class is ‘Queue’. First, we shall define the __init__() function, which takes two parameters – self and size of the queue. We use a list named ‘queue’ to represent the queue. The list ‘queue’ is defined with None values, and its size is the size of the queue.
Initially, the front and rear are set to 0. They represent the index of the first and the last element in the list queue, respectively. We also have an 'available' variable which represents the number of available slots inside the queue. Since the queue is empty at the beginning, 'available' is set to the size of the queue. We also give the class a small helper for printing the underlying list:

```python
def print_queue(self):
    print(self.queue)
```
Enqueue function
The enqueue function takes two arguments – self and the item to be inserted in the queue. First, we check whether the queue contains any available space or not. If self.available is zero, then we would print ‘Queue Overflow’. Otherwise, using the ‘rear’ as in index, we shall assign the value ‘item’ at that index. Then, we shall increment ‘rear’ by one and decrement ‘available’ by one.
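A sketch of the method matching this description (it goes inside the Queue class and uses the same attribute names as the rest of the article):

```python
def enqueue(self, item):
    if self.available == 0:           # queue is full
        print('Queue Overflow')
    else:
        self.queue[self.rear] = item  # insert at the rear index
        self.rear += 1
        self.available -= 1
```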
Dequeue function
The dequeue function takes one argument, which is self. First, we shall check if the self.available is equal to self.size or not. If it is, then it means that the queue is empty, and so it will print Queue Underflow. Else, we would assign the queue item at the ‘front’ index back to ‘None’. Then, we shall increment self.front and self.available by one.
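A sketch of the method matching this description (again, it goes inside the Queue class):

```python
def dequeue(self):
    if self.available == self.size:    # nothing stored yet
        print('Queue Underflow')
    else:
        self.queue[self.front] = None  # drop the front element
        self.front += 1
        self.available += 1
```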
We also have a print_queue() function for printing the queue.
Now, we shall create a queue class object named ‘queue1’. To that, we shall first add two items – 10 and 20 and then perform a dequeue operation once. Then, we shall print the queue. It will only contain one element – 20 at the second position.
```python
queue1 = Queue(4)
queue1.enqueue(10)
queue1.enqueue(20)
queue1.dequeue()
queue1.print_queue()
```
Output:
[None, 20, None, None]
Peek function in queue
A peek function in the python queue is used to print the first element of the queue. It returns the item which is present at the front index of the queue. It will not remove the first element but print it.
Let us construct a function inside the class queue to perform a peek operation. The name of the function is ‘peek’, and it accepts only one argument – self. Inside the function body, we shall print the list queue with index as ‘self.front’.
```python
def peek(self):
    print(self.queue[self.front])
```
To call the function, we shall use queue1.peek() syntax.
queue1.peek()
The entire code for the queue class is:

```python
class Queue:
    def __init__(self, size):
        self.queue = [None] * size
        self.size = size
        self.front = 0
        self.rear = 0
        self.available = size

    def enqueue(self, item):
        if self.available == 0:
            print('Queue Overflow')
        else:
            self.queue[self.rear] = item
            self.rear += 1
            self.available -= 1

    def dequeue(self):
        if self.available == self.size:
            print('Queue Underflow')
        else:
            self.queue[self.front] = None
            self.front += 1
            self.available += 1

    def peek(self):
        print(self.queue[self.front])

    def print_queue(self):
        print(self.queue)


queue1 = Queue(4)
queue1.enqueue(10)
queue1.peek()
queue1.enqueue(20)
queue1.dequeue()
queue1.peek()
queue1.print_queue()
```
The output is:
10 20 [None, 20, None, None]
How to add peek function to priority queue?
A priority queue is a type of queue where every element has a priority value associated with it. Unlike a normal queue, which implements the FIFO principle, a priority queue removes values based on their priority. The item with the lowest value is at the 'front' end, and the item with the highest value is at the 'rear' end.
Here, we shall implement the peek function inside the priority queue. First, we shall import the PriorityQueue class from the built-in queue module. Then, we have a user-defined class named PQueue which inherits the PriorityQueue class. Inside the PQueue class, we shall define a function named ‘peek’, which returns the first element of ‘queue’.
```python
from queue import PriorityQueue


class PQueue(PriorityQueue):
    def peek(self):
        return self.queue[0]


myQueue = PQueue()
myQueue.put(12)
myQueue.put(2)
myQueue.put(1)
myQueue.put(7)

print(myQueue.peek())
```
Then, we create an object of the PQueue class named ‘myQueue’. The PriorityQueue class has a function named put() to add elements to the queue. By accessing the put() function using myQueue, we shall add elements to the queue. Then we shall call the peek() function. Since it is a priority queue, the element with the lowest value will be pointed by ‘front’.
The output will be:
1
That sums up queue peek in python. If you have any questions in your mind, feel free to leave them in the comments below.
Until next time, Keep Learning!
We are given a binary tree, and the task is to count the non-leaf nodes in it.
Binary Tree is a special data structure used for data storage purposes. A binary tree has a special condition that each node can have a maximum of two children. A binary tree has the benefits of both an ordered array and a linked list: search is as quick as in a sorted array, and insertion or deletion operations are as fast as in a linked list. Non-leaf nodes are also known as parent nodes, as they have at least one child.
Structure of a binary tree is given below −
Input −
Output − count of non-leaf nodes is: 3
Explanation − In the given tree, 27, 14 and 35 are the non-leaf nodes, since each of them has at least one child.
Create the structure of a binary tree node containing a pointer to the left child, a pointer to the right child, and a data part.
Create a function that allocates a new node whenever it is called: store the data in the new node, set its left and right pointers to NULL, and return it.
Create a recursive function that will count the number of non-leaf nodes in a binary tree.
Print the count
```cpp
#include <iostream>
using namespace std;

// Node's structure
struct Node {
    int data;
    struct Node* left;
    struct Node* right;
};

// To define the new node
struct Node* newNode(int data) {
    struct Node* node = new Node;
    node->data = data;
    node->left = node->right = NULL;
    return (node);
}

// Count the non-leaf nodes
int nonleaf(struct Node* root) {
    if (root == NULL || (root->left == NULL && root->right == NULL)) {
        return 0;
    }
    return 1 + nonleaf(root->left) + nonleaf(root->right);
}

// Main function
int main() {
    struct Node* root = newNode(10);
    root->left = newNode(21);
    root->right = newNode(33);
    root->left->left = newNode(48);
    root->left->right = newNode(51);
    cout << "count of non-leaf nodes is: " << nonleaf(root);
    return 0;
}
```
If we run the above code it will generate the following output −
count of non-leaf nodes is: 2
Here at ZuluOneZero we love guns in games and other fantasy settings for their ability to embody drama, tension and action. They are a super power that takes us beyond the normal abilities to exert force. Sadly in real life guns suck and if you like shooting guns for real stay the heck away from me and my friends.
Anywhoo… this is how we use Guns in Endless Elevator to dispatch the Bad Guys and generally wreak havoc in an otherwise quietly innocent building.
This is our hero The LawMan! See he comes with that super powered lawmaster pistol, standard issue bullet proof vest, sherrif’s hat, and a make no mistakes we mean business moustache (can you tell he’s smiling under that?).
This is how he looks in the Unity Scene Editor. See those two Objects he has as Children “Gun” and “smoke”? They sit invisibly just where the 3D curser is in the image below…just at the end of the gun barrel. The Gun object is used as the spawn point for bullets and smoke is one of two particle systems that go off when the gun fires (the other particle system does sparks and is attached as a component directly to the Game Object).
There are four scripts that handle the basic Gun actions in our game. There are of course plenty of ways to do this – but this is the way we do it for this game. One script handles the aiming of the gun as part of the Character Controller. Another script attached to the Character handles the firing of the Gun and the spawning of the bullets. The Bullets have their own script that handles Gravity, Acceleration and Animations while alive and during Collisions. The last script attached to the Bad Guy handles the impact effects of the Bullets and the “blood”.
We wanted to keep the cartoon elements of gun violence in this game and get away from realism as much as possible. That said a bullet strike has a pretty huge impact and when we had a red coloured particle system for the blood it looked really gruesome. We changed the impact force to be over the top and super exaggerated so that it’s funnier reaction and moved the particle system color to yellow (might change it to stars later on). The Bullets are supposed to look like expanding rubber dum dum bullets so they grow out of the gun and enlarge a little bit in-flight. After a collision they start to shrink again and get very bouncy.
Here is a sample of game play where our hero blasts away at some chump.
So the Hero Character has a couple of scripts that handle aiming and firing.
The snippet below is the aiming component of the Character Controller. The Player has a wide Trigger Collider out the front that picks up when it hit’s a Bad Guy. If there are no _inputs from the Controller (ie. the Player stops moving) then he will automagically rotate towards the Bad Guy and thus aim the gun at him.
```csharp
void OnTriggerStay(Collider otherObj)
{
    if (otherObj.name == "bad_spy_pnt" || otherObj.name == "Knife_spy")
    {
        if (_inputs == Vector3.zero)
        {
            Vector3 lookatposi = new Vector3(otherObj.transform.position.x,
                                             transform.position.y,
                                             otherObj.transform.position.z);
            transform.LookAt(lookatposi);
        }
    }
}
```
Now once we are aiming we can fire (you can shoot at anytime – and straffing is pretty fun – but it’s far easier to hit if you stop and let the auto-aim work for you). For firing this is what the FireBullet script exposes in the Editor:
There is a “bullet” prefab which we will talk about below, the Gun transform (ie. bullet spawn point), the force at which the bullet is spawned with, the audio for the shot, and the particle system for the smoke (called PartyOn …. I know).
The script itself is pretty straightforward: The bullet is spawned at the Gun Transform with a direction and force, the gun noise goes off, and the particle system does smoke and sparks. After two seconds the bullet is destroyed. This is what it looks like:
```csharp
using UnityEngine;

public class FireBullet : MonoBehaviour
{
    public GameObject bulletPrefab;
    public Transform bulletSpawn;
    public float force;
    public AudioClip fireNoise;
    private AudioSource MyAudio;
    public ParticleSystem partyOn;
    public bool includeChildren = true;

    void Start()
    {
        MyAudio = GetComponent<AudioSource>();
        partyOn = GetComponent<ParticleSystem>();
    }

    void Update()
    {
        if (Input.GetKeyDown(KeyCode.Space))
        {
            Fire();
        }
    }

    public void Fire()
    {
        partyOn.Play(includeChildren);
        var bullet = (GameObject)Instantiate(bulletPrefab,
            bulletSpawn.transform.position, bulletSpawn.transform.rotation);
        MyAudio.Play();
        bullet.GetComponent<Rigidbody>().AddForce(transform.forward * force);
        Destroy(bullet, 2.0f);
    }
}
```
One of the really important lessons we learned from this script is to get the Transforms and X/Y/Z directions of your models imported into Unity in the right direction first up. We had a few different models for bullets over the last few weeks ranging from simple cylinders, to pillows and bean bags, and real bullet shapes. It makes it so much easier to direct objects if their rotations are correct to start with. For example we did one quick model of a cylinder but had it sitting on the Z axis instead of X so when we did the “forward” force the bullet would travel sideways.
This is how our bullet looks now:
This is how the script to handle it’s behaviours and the settings of it’s Rigidbody and Collider:
It’s got a Rubber material on the Collider so that it bounces around when Gravity is enabled in the Rigidbody on Collision. We disabled Gravity so that we could slow down the Bullet firing path and not have to use so much Force. Having a slow bullet adds to the cartoon drama, reinforces the rubber bullet idea, and looks less like killing force. Here is the script:
```csharp
using UnityEngine;

public class BulletGravity : MonoBehaviour
{
    private Rigidbody rb;
    public float force;
    public float accelleration;
    public bool collided;
    public float scaleFactorUp;
    public float scaleFactorDown;
    public float maxScale;
    public float bounceForce;

    void Start()
    {
        rb = GetComponent<Rigidbody>();
    }

    // Grow the bullet while in flight, up to maxScale;
    // after a collision, shrink it back down again.
    void Update()
    {
        if (!collided && transform.localScale.x < maxScale)
        {
            transform.localScale += new Vector3(scaleFactorUp, scaleFactorUp, scaleFactorUp);
        }
        else if (collided && transform.localScale.x > scaleFactorDown)
        {
            transform.localScale -= new Vector3(scaleFactorDown, scaleFactorDown, scaleFactorDown);
        }
    }

    void FixedUpdate()
    {
        if (!collided)
        {
            rb.AddForce(transform.forward * force * accelleration);
        }
    }

    void OnCollisionEnter(Collision col)
    {
        collided = true;
        rb.useGravity = true;
        rb.velocity = transform.up * bounceForce;
    }
}
```
Below is the Collision part of the script that handles the BadGuy flying up into the air and bleeding all over the place.
```csharp
void OnCollisionEnter(Collision col)
{
    colName = col.gameObject.name;
    if (colName == "bullet(Clone)")
    {
        hitpoint = col.transform.position;
        GameObject BloodObject = Instantiate(BloodPrefab, hitpoint,
            new Quaternion(0, 0, 0, 0), TargetObject.transform) as GameObject;
        Destroy(BloodObject, 5.0f);

        // We disable his AI script so he doesn't try and walk around
        // after being shot :) It's not a zombie game.
        var AI_script = GetComponent<EnemyAI>();  // "EnemyAI" is a placeholder; use your own AI component type
        if (AI_script)
        {
            AI_script.enabled = false;
        }

        Vector3 explosionPos = transform.position;
        explosionPos += new Vector3(1, 0, 0);
        rb.AddExplosionForce(power, explosionPos, radius, offset, ForceMode.Impulse);
        yesDead = true;
        Destroy(transform.parent.gameObject, 2f);
    }
}
```
Let’s break down the whole process visually. Here is our Hero in a standoff with a Baddy. He hasn’t aimed yet but you can see his big aiming collider in yellow underneath him extending out front and of course the character box colliders are visible too in green. (The Bad Guy has a yellow sphere collider on his head cause we make him get squashed by lifts!).
Here we are just after firing. His aiming script has put him on target and his bullet is there travelling with force and bloating in size.
And this is the impact with the force applied to the Bad Guy and his yellow blood impact stuff spewing out dramatically for effect.
All we got to do now is to clean up the Bad Guy’s object and destroy the used bullets. We don’t do any object pooling for the bullets yet…the overhead for the current system has warranted it but maybe later on.
That’s about it. Hope you’ve enjoyed this gun for fun expo-say.
Trixie out. | https://www.zuluonezero.net/2019/03/15/unity-how-to-do-guns/ | CC-MAIN-2021-31 | refinedweb | 1,326 | 62.98 |
sys/msg.h - message queue structures
#include <sys/msg.h>
The <sys/msg.h> header defines the following constant and members of the structure msqid_ds.
The following data types are defined through typedef:
- msgqnum_t
Used for the number of messages in the message queue.Used for the number of messages in the message queue.
- msglen_t
Used for the number of bytes allowed in a message queue.Used for the number of bytes allowed in a message queue.
These types are unsigned integer types that are able to store values at least as large as a type unsigned short.
Message operation flag:
- MSG_NOERROR
- No error if big message.
The structure msqid_ds and size_t types are defined as described in <sys/types.h>.
The following are declared as functions and may also be defined as macros. Function prototypes must be provided for use with an ISO C compiler.
int msgctl(int, int, struct msqid_ds *); int msgget(key_t, int); ssize_t msgrcv(int, void *, size_t, long int, int); int msgsnd(int, const void *, size_t, int);
In addition, all of the symbols from <sys/ipc.h> will be defined when this header is included.
None.
None.
msgctl(), msgget(), msgrcv(), msgsnd(), <sys/types.h>. | http://pubs.opengroup.org/onlinepubs/007908775/xsh/sysmsg.h.html | crawl-003 | refinedweb | 198 | 67.76 |
GCC
GCC Profiling
Overview
To use profiling, the program must be compiled and linked with the
-qg profiling option:
We will use an called profiling_test.c (full code can be found at):
#include <stdio.h> int fibonacci(int n) { if(n == 0) return 0; else if(n == 1) return 1; else return(fibonacci(n-1) + fibonacci(n-2)); } int loop100M() { int val = 0; for(int i = 0; i < 100000000; i++) { if(i % 10 == 0) val++; else if(i % 3) val--; } return val; } int main (void) { printf("Fibonacci value = %u\n", fibonacci(40)); printf("Loop value = %u\n", loop100M()); return 0; }
We will then compile it with the command:
$ gcc -pg profiling_test.c -o profiling_test
This creates what is called an instrumented executable. It contains additional code which records the time spent in each function.
When run, the program will produce a file
gmon.out in the same directory as it is run. You can pass your program to gprof to display the profiling results:
$ gprof ./profiling_test Flat profile: Each sample counts as 0.01 seconds. % cumulative self self total time seconds seconds calls ms/call ms/call name 60.96 0.68 0.68 1 676.63 676.63 fibonacci 31.84 1.03 0.35 1 353.47 353.47 loop100M 8.19 1.12 0.09 frame_dummy ...
You can see above that approximately 60% of the time was spent calculating the Fibonacci sequence, while 30% was spent looping 100 million times. If this was a real life scenario, you could now start to optimise your code!
If you find text hard to analyze, see the gprof2dot section below on how to create a visualization of the above results.
The above command will write the profiling results to the terminal. Instead, if you wish to write the results to a file, use the following command:
$ gprof profiling_test > profiling_results.txt
Clean Exiting
gmon.out is only written to if your C/C++ program exits cleanly, that is, it either calls
exit() or returns from
main().
Here is the relevant info from the gprof manual:
The profiled program must call
"exit"(2)or return normally for the profiling information to be saved in the
gmon.outfile.
Your program doesn’t count as a clean exit if it is running in a Linux terminal and Ctrl-C is pressed! However, there is a way to fix this, by catching the
Ctrl-C signal and writing to the file before exiting…
#include <dlfcn.h> #include <stdio.h> #include <unistd.h> void SigIntHandler(int sig) { fprintf(stderr, "Exiting on SIGUSR1\n"); void (*_mcleanup)(void); _mcleanup = (void (*)(void)) dlsym(RTLD_DEFAULT, "_mcleanup"); if (_mcleanup == NULL) fprintf(stderr, "Unable to find gprof exit hook\n"); else _mcleanup(); _exit(0); } int main() { signal(SIGINT, SigIntHandler); ... code that does not return here }
gprof2dot
gprof2dot is a tool that can create a visualization of the gprof output. TO install
gprof2dot:
$ pip install gprof2dot
To install
graphviz (which is needed if you are going to make “dot” graphs like below):
$ sudo apt install graphviz
To create a dot graph image:
$ gprof2dot ./profiling.txt | dot -Tpng -o profiling.png
This created the below image for the example code above:
Related Content:
- GCC Compiler Optimisation Levels
- Passing A C++ Member Function To A C Callback
- STM32CubeIDE
- GCC Compiler Errors And How To Fix Them
- pybind11 | https://blog.mbedded.ninja/programming/compilers/gcc/gcc-profiling/ | CC-MAIN-2021-17 | refinedweb | 552 | 63.8 |
Markov Chain Monte Carlo Simulation For Airport Queuing Network
[ad_1]
Right this moment we’ll introduce a brand new set of algorithms referred to as Markov Chain Monte Carlo which will not fall in supervised learning algorithms. On this weblog put up we are going to stroll by way of, what Markov Chains are and the place we will use it.
We are going to introduce the primary household of algorithms, identified collectively as Markov Chain Monte Carlo (MCMC), that permits us to approximate the posterior distribution as calculated by Bayes’ Theorem.
Particularly, we contemplate the Metropolis Algorithm, which is well said and comparatively easy to grasp. It serves as a helpful start line when studying about MCMC earlier than delving into extra refined algorithms resembling Metropolis-Hastings, Gibbs Samplers, and Hamiltonian Monte Carlo.
As soon as we’ve got described how MCMC works, we are going to carry it out utilizing the open-source PyMC3 library, which takes care of lots of the underlying implementation particulars, permitting us to focus on Bayesian modeling.
Markov Chain Monte Carlo Simulation For Airport Queuing Community
Earlier than we drive additional let’s take a look at what you’ll be taught by the top of the article.
Markov chain Monte Carlo analogy
Earlier than getting began we’ll attempt to perceive the analogy behind Markov Chains. After we are getting right into a studying curve within the subject of analytics we’ve got varied divisions like first we’ll begin with forecasting after which linear regression after we’ll get into classification algorithms that are non-parametric fashions.
After this curve, we’ll get into Neural Networks like CNN, R-CNN, Auto-encoders so on, and so forth.
As soon as these are accomplished now we are going to get into the stage of Markov Chains(MC) and Hidden Markov Chains (HMC) that are purely stochastic fashions so as to make a press release on random predictions.
Let’s perceive it by instance,
Decision tree algorithms can say whether or not they will purchase it or not. Random forest says what are the assorted situations should be happy so as to make a press release. The logistic algorithm can say sure/no statements utilizing sigmoid equations. When stepping into CNN they will acknowledge the photographs and by utilizing RNN they will do sequential tasking.
The place we use Markov chains
All of the above set of algorithms classes are meant to foretell the function utilizing the historic knowledge, But when we wish to predict like for those who’re sitting in a restaurant and also you’re ready for the waiter so as to take up the order proper.
So which algorithm be used so as to make the assertion?.
Whether or not Virat Kohli going to hit a six or not,
which algorithm we have to use? In these situations we will’t go into deep studying algorithms, we will’t do forecasting proper. So we wish to make a press release on the above statements primarily based on the present occasion we wish to make a prediction in regards to the subsequent ball.
So for all these prompt primarily based predictions, the one route we’ve got is Markov Chains and Hidden Markov Chains. Markov Chains are one of many highly effective algorithms the place we’re capable of extract or capable of make a press release on random occasions.
For these sorts of occasions, we desire Markov Chains. Earlier than we find out about markov chains, we have to find out about bayes’s rule.
Let’s spend a while now.
Bayes’s Rule
If we recall Bayes’s Rule:
The place:
We are able to see that we have to calculate the proof P(D). In an effort to obtain this we have to consider the next integral, which integrates over all attainable values of, the parameters:
The elemental drawback is that we are sometimes unable to guage this integral analytically and so we should flip to a numerical approximation methodology as an alternative.
An extra drawback is that our fashions may require numerous parameters. Because of this our prior distributions might probably have numerous dimensions.
This in flip implies that our posterior distributions can even be excessive dimensional. Therefore, we’re in a state of affairs the place we’ve got to numerically consider an integral in a probably very massive dimensional area.
This implies we’re in a state of affairs usually described because the Curse of Dimensionality. Informally, which means the amount of a high-dimensional area is so huge that any accessible knowledge turns into extraordinarily sparse inside that area and therefore results in issues of statistical significance.
Virtually, so as to achieve any statistical significance, the amount of information wanted should develop exponentially with the variety of dimensions.
Such issues are sometimes extraordinarily troublesome to sort out until they’re approached in an clever method. The motivation behind Markov Chain Monte Carlo’s strategies is that they carry out an clever search inside a excessive dimensional area and thus Bayesian Fashions in excessive dimensions grow to be simple to manage.
The essential thought is to pattern from the posterior distribution by combining a “random search” with a mechanism for intelligently “leaping” round, however in a fashion that finally doesn’t rely upon the place we began from.
Therefore Markov Chain Monte Carlo strategies are memoryless searches carried out with clever jumps.
The Metropolis Algorithm
There’s a massive household of Algorithms that carry out MCMC. Most of those algorithms may be expressed at a excessive stage as follows:
- Start the algorithm on the present place in parameter area.
- Suggest a “leap” to a brand new place in parameter area.
- Settle for or reject the leap probabilistically utilizing the prior data and accessible knowledge.
- If the leap is accepted, transfer to the brand new place and return to step 1.
- If the jumps are rejected, keep the place you might be and return to step 1.
- After a set of quite a lot of jumps has occurred, return all accepted positions.
The primary distinction between MCMC algorithms happens in the way you leap in addition to the way you resolve whether or not to leap.
The Metropolis algorithm makes use of a traditional distribution to suggest a leap. This regular distribution has a imply worth μ which is the same as the present place and takes a “proposal width” for its normal deviation σ.
A traditional distribution is an effective selection for such a proposal distribution (for steady parameters), it’s extra prone to choose factors nearer to the present place than additional away. Nonetheless, it is going to sometimes select factors additional away, permitting the area to be explored.
As soon as the leap has been proposed, we have to resolve (in a probabilistic method) whether or not it’s a good transfer to leap to the brand new place. How will we do that? We calculate the ratio of the proposal distribution of the brand new place and the proposal distribution on the present place to find out the likelihood of transferring, p:
About PyMC3
PyMC3 is a Python package deal for Bayesian statistical modeling and probabilistic machine studying which focuses on superior Markov chain Monte Carlo and variational becoming algorithms. It’s a rewrite from scratch of the earlier model of the PyMC software program.
Simulation utilizing PyMC3
The instance we wish to mannequin and simulate relies on this situation: a each day flight from London to Rome has a scheduled departure time at 12:00 am, and a regular flight time of two hours.
We have to arrange the operations on the vacation spot airport, however we do not wish to allocate assets when the aircraft hasn’t landed but. Subsequently, we wish to mannequin the method utilizing a Bayesian community and contemplating some frequent elements that may affect the arrival time.
Particularly, we all know that the onboarding course of may be longer than anticipated, in addition to the refueling one, even when they’re carried out in parallel. London air visitors management also can impose a delay, and the identical can occur when the aircraft is approaching Rome. We additionally know that the presence of tough climate could cause one other delay as a consequence of a change of route.
We are able to summarise this evaluation with the next plot
Bayesian community representing the air visitors management drawback
Contemplating our expertise, we resolve to mannequin the random variables utilizing the next distributions:
- Passenger onboarding ~ Wald(µ = 0.5, λ = 0.2)
- Refueling ~ Wald( µ = 0.25, λ = 0.5)
- Departure visitors delay ~ Wald(µ = 0.1, λ = 0.2)
- Arrival visitors delay ~ Wald(µ = 0.1, λ = 0.2)
- Departure time = 12 + Departure visitors delay + max(Passenger onboarding, Refueling)
- Tough climate ~ Bernoulli(p =0.35)
- Flight time ~ Exponential(λ = 0.5 – (0.1 . Tough climate))(The output of a Bernoulli distribution is Zero or 1 akin to False and True)
- Arrival time = Departure time + Flight time + Arrival visitors delay
Departure time and Arrival time are capabilities of random variables, and the parameter λ of Flight time can also be a perform of Tough Climate.
Markov Chain Monte Carlo Simulation with PyMC3
Even when the mannequin shouldn’t be very complicated, the direct inference is quite inefficient, and due to this fact we wish to simulate the method utilizing PyMC3.
Step one is to create a mannequin occasion:
Any further, all operations should be carried out utilizing the context supervisor offered by the mannequin variable. We are able to now arrange all of the random variables of our Bayesian community
We’ve imported two namespaces, pymc3.distributions.steady and pymc3.distributions.discrete as a result of we’re utilizing each sorts of variables.
Wald and exponential are steady distributions, whereas Bernoulli is discrete. Within the first three rows, we declare the variables passenger_onboarding, refueling, and departure_traffic_delay.
The construction is all the time the identical: we have to specify the category akin to the specified distribution, passing the identify of the variable and all of the required parameters.
The departure_time variable is asserted as pm. Deterministic. In PyMC3, which means, as soon as all of the random components have been set, its worth turns into fully decided.
Certainly, if we pattern from departure_traffic_delay, passenger_onboarding, and refueling, we get a decided worth for departure_time. On this declaration, we have additionally used the utility perform pmm.swap, which operates a binary selection primarily based on its first parameter (for instance, if A > B, return A, else return B).
The opposite variables are very comparable, aside from flight_time, which is an exponential variable with a parameter λ, which is a perform of one other variable (rough_weather). As a Bernoulli variable outputs 1 with likelihood p and Zero with likelihood 1 – p, λ = 0.four if there’s tough climate, and 0.5 in any other case.
As soon as the mannequin has been arrange, it is attainable to simulate it by way of a sampling course of. PyMC3 picks the perfect sampler robotically, in keeping with the kind of variables. Because the mannequin shouldn’t be very complicated, we will restrict the method to 500 samples:
The output may be analyzed utilizing the built-in pm.traceplot() perform, which generates the plots for every of the pattern’s variables. The next graph reveals the element of one in all them:
Distribution and samples for the arrival time random variable
PyMC3 offers a statistical abstract that may assist us in making the precise choices utilizing pm.abstract(). Within the following snippet, the output containing the abstract of a single variable is proven:
For every variable, it comprises imply, normal deviation, Monte Carlo error, 95% highest posterior density interval, and the posterior quantiles. In our case, we all know that the aircraft will land at about 15:10 (15.174).
That is solely a quite simple instance to point out the ability of Bayesian networks.
Listing of parametric and non-parametric Algorithms
Machine studying algorithms may be labeled as two distinct teams: parametric and non-parametric
We are able to classify algorithms as non-parametric when fashions grow to be extra complicated if the variety of samples within the coaching set will increase. Vice versa, a mannequin can be parametric if the mannequin turns into secure when the variety of examples within the coaching set will increase.
In easy phrases, we will say parametric has a purposeful type whereas non-parametric has no purposeful type.
Useful type contains a easy method like y = f(x). So for those who enter a worth, you might be to get a set output worth. It means if the information set is modified or being modified there may be not a lot variation within the outcomes. However in non-parametric algorithms, a small change in knowledge units may end up in a big change in outcomes.
- Non-parametric fashions
- Parametric fashions
Listing of strategies that may carry out MCMC
- The metropolis algorithm
- The Metropolis-Hasting algorithm
- The Gibbs sampler
- Hamiltonian Monte Carlo
- No U-turn sampler (and a number of other variants)
Full Code
Beneath is the whole code we’ve got defined within the article, you clone the code in our GitHub repo too.
Conclusion
On this article, we discovered the fundamentals of markov chain monte carlo, one particular methodology referred to as the Metropolis algorithm, the way to implement them utilizing PyMC3.
Subsequent Steps
Within the coming articles, we’ll additional talk about sampling methods resembling Metropolis-Hastings, Gibbs Sampling, and Hamiltonian Monte Carlo.
Really useful Programs
[ad_2]
Source link | http://openbootcamps.com/markov-chain-monte-carlo-simulation-for-airport-queuing-network/ | CC-MAIN-2021-25 | refinedweb | 2,274 | 50.77 |
Advertisement
In the previous blog, we took the Circle Detection, now we’re moving one step further and we’re going to learn about shape detection in OpenCV Python.
Let’s Start Coding Shape Detection in OpenCV Python.
The imports for this program will also be the same as the previous blog i.e import cv2, import NumPy, and also import matplotlib if you want to show the pictures in a grid format.
Shape Detection OpenCV Algorithm
First of all, read and store the image. For this example, I am taking an image that contains shapes like triangle, square, rectangle, and circle.
The image is then converted to grayscale using the
cvtColor() function.
Grayscaled image is then thresholded using the THRESH_BINARY Method. The Thresholded image is then taken and contours are found on that image.
The Contours obtained is then looped and the edges are counted using the
approxPolyDP() function, that takes each contour.
The edges of the Contour is then drawn on the image using
drawContours() function.
To write the name of the shape blocks of if-else are used that take decisions on the basis of the number of edges.
Example: If three edges are found the shape will be triangle.
In the case of square and rectangle aspect ratio between the width and height is calculated. If the ratio is close to 1 then the shape is square else rectangle.
If no edges edges are found then it is likely to be a circle.
def shapes(): img = cv.imread('./img/shapess.jpg') img_gray = cv.cvtColor(img, cv.COLOR_BGR2GRAY) _, thresh = cv.threshold(img_gray, 240, 255, cv.THRESH_BINARY) contours, _ = cv.findContours(thresh, cv.RETR_TREE, cv.CHAIN_APPROX_NONE) white = np.ones((img.shape[0], img.shape[1], 3)) for c in contours: approx = cv.approxPolyDP(c, 0.01*cv.arcLength(c, True), True) cv.drawContours(img, [approx], 0, (0, 255, 0), 5) x = approx.ravel()[0] y = approx.ravel()[1] - 5 if len(approx) == 3: cv.putText(img, "Triangle", (x, y), cv.FONT_HERSHEY_SIMPLEX, 1, (0, 0, 0), 1) elif len(approx) == 4: x1, y1, w, h = cv.boundingRect(approx) aspect_ratio = float(w) / float(h) print(aspect_ratio) if aspect_ratio >= 0.95 and aspect_ratio <= 1.05: cv.putText(img, "Square", (x, y), cv.FONT_HERSHEY_SIMPLEX, 1, (0, 0, 0), 1) else: cv.putText(img, "Rectangle", (x, y), cv.FONT_HERSHEY_SIMPLEX, 1, (0, 0, 0), 1) elif len(approx) == 5: cv.putText(img, "Pentagon", (x, y), cv.FONT_HERSHEY_SIMPLEX, 1, (0, 0, 0), 1) elif len(approx) == 10: cv.putText(img, "Star", (x, y), cv.FONT_HERSHEY_SIMPLEX, 1, (0, 0, 0), 1) else: cv.putText(img, "Circle", (x, y), cv.FONT_HERSHEY_SIMPLEX, 1, (0, 0, 0), 1) cv.imshow("Shapes", img) cv.waitKey() cv.destroyAllWindows()
Shape Detection Image
Learn more about OpenCV Projects, get all the source code from GitHub. | https://hackthedeveloper.com/shape-detection-opencv-python/ | CC-MAIN-2021-43 | refinedweb | 463 | 70.7 |
Home › Forums › Other › Can someone recommend a customizable contact form, please? › Reply To: Can someone recommend a customizable contact form, please?
Inactive
Greetings Traq,
I changed this:
return $this->_notice.str_replace( $placeholders,$replacements,$this->_htmlMarkup );
to this:
return str_replace( $placeholders,$replacements,$this->_htmlMarkup );
and added this on the same form.php after initially putting it on the wrong php:
public function htmlMarkup_userNotice(){ return ($this->_notice)? "<p>".htmlspecialchars( $this->_notice )."</p>": false;
I have added this to the contact page:
<div> <?= $contactForm->htmlMarkup( "" ); ?> </div>
I am not sure I have the above correct however.
I don’t believe I am getting this in the correct place either
<?= $contactForm->htmlMarkup_userNotice() ?>
as the form submits, but doesn’t direct to the success.php page.
Best Regards. | https://css-tricks.com/forums/reply/175549/ | CC-MAIN-2022-27 | refinedweb | 123 | 51.04 |
The Java Native Interface (JNI) allows you to call native functions from Java code, and vice versa. This example shows how to load and call a native function via JNI, it does not go into accessing Java methods and fields from native code using JNI functions.
Suppose you have a native library named
libjniexample.so in the
project/libs/<architecture> folder, and you want to call a function from the
JNITestJava class inside the
com.example.jniexample package.
In the JNITest class, declare the function like this:
public native int testJNIfunction(int a, int b);
In your native code, define the function like this:
#include <jni.h> JNIEXPORT jint JNICALL Java_com_example_jniexample_JNITest_testJNIfunction(JNIEnv *pEnv, jobject thiz, jint a, jint b) { return a + b; }
The
pEnv argument is a pointer to the JNI environment that you can pass to JNI functions to access methods and fields of Java objects and classes. The
thiz pointer is a
jobject reference to the Java object that the native method was called on (or the class if it is a static method).
In your Java code, in
JNITest, load the library like this:
static{ System.loadLibrary("jniexample"); }
Note the
lib at the start, and the
.so at the end of the filename are omitted.
Call the native function from Java like this:
JNITest test = new JNITest(); int c = test.testJNIfunction(3, 4); | https://www.notion.so/a6b64a910cf14d67bf673b2070bc15e0 | CC-MAIN-2022-27 | refinedweb | 226 | 60.14 |
Here is the second chapter excerpt from JavaSpaces Principles, Patterns, and Practice, recently published by Addison-Wesley as part of the Jini Technology Series from Sun Microsystems, Inc. (See Chapter 1 Introduction, for the first chapter excerpt from this book.
Note from the authors:
JavaSpaces is a powerful Jini service that provides a high-level tool for creating collaborative and distributed applications in Java. In the book JavaSpaces Principles, Patterns, and Practice, we cover some simple but powerful frameworks for JavaSpaces applications, including the command pattern. Using the command pattern, it's possible to implement a powerful compute server, consisting of a collection of worker processes each of which repeatedly picks up a task from the space (that was put there by a master process), calls its
execute method to compute the task, and moves on to the next task. The workers typically write the results of their task computations back to the space, so the master process can pick them up.
Chapter 11, excerpted here, reviews the command pattern (which is introduced in Chapter 6 of the book), and uses it to build a compute server. Then, we have some fun building upon that compute server to construct an application that breaks encrypted passwords.

-- Eric Freeman and Susanne Hupfer
You can order this book from Amazon.com
In pioneer days, they used oxen for heavy pulling, and when one ox couldn't
budge a log they didn't try to grow a larger ox. We shouldn't be trying for bigger
computers, but for more systems of computers.
-- Grace Hopper
This chapter builds on the compute server we developed in Chapter 6 and explores its use in two directions: First, we are going to extend our simple compute server to make use of a couple of the technologies we covered in the last few chapters, namely transactions and leases. Then, we are going to develop a parallel application on top of the server that will allow us to explore some of the more subtle issues that arise when developing space-based parallel applications. These issues range from keeping the workers busy with tasks (yet not overwhelming the space with too many tasks), to the computation/communication ratio, to cleaning up the space after we use it.
The application we are going to develop is a fun one: breaking password encryption. Let's get started.
Before building our application we are first going to rework the simple compute server we developed in Chapter 6. At this point you might want to return to that chapter and quickly review the compute server code.
Command Interface Revisited
Recall that all tasks computed by the compute server must implement the
Command interface, which contains one method,
execute. To submit a task to a compute server, we create a task entry that implements the command interface and drop it into the server's bag of tasks. A worker in the compute server eventually removes this task and invokes its
execute method. The
execute method computes a task and then optionally returns an entry, which the worker will write back into the space.
Below we present the
Command interface, with a couple of small changes:
public interface Command extends Entry {
    public Entry execute(JavaSpace space);
}
First, note that the
execute method now takes a space as a parameter, which allows tasks to make use of the space within their code. For instance, a task may want to obtain additional data from the space, return intermediate results to the space, or communicate with other tasks.
We've also changed the return type of
execute to make it more general. Rather than returning an entry type of
ResultEntry, we return an
Entry so that tasks no longer have to subclass the
ResultEntry class.
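To make the contract concrete, here is a small, self-contained illustration of a task implementing this interface. The SumTask and SumResult names are invented for this example, and the Entry and JavaSpace stubs stand in for the real Jini types (net.jini.core.entry.Entry and net.jini.space.JavaSpace) so the sketch compiles on its own:

```java
// Stand-in stubs for the Jini types so this sketch compiles on its own.
interface Entry extends java.io.Serializable {}
interface JavaSpace {}

interface Command extends Entry {
    Entry execute(JavaSpace space);
}

// A result entry carrying the computed value.
class SumResult implements Entry {
    public Integer sum;
    public SumResult(Integer sum) { this.sum = sum; }
}

// A hypothetical task: the worker calls execute, and the entry
// returned here is written back into the space for the master.
class SumTask implements Command {
    public Integer a, b;
    public SumTask(Integer a, Integer b) { this.a = a; this.b = b; }

    public Entry execute(JavaSpace space) {
        return new SumResult(a + b);
    }
}
```

In use, a master would write a SumTask into the space, and later take the SumResult that some worker wrote back.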
TaskEntry
While we've gotten rid of
ResultEntry, we still have a base
TaskEntry class, which we first introduced in Chapter 6. Here is our updated
TaskEntry:
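The listing itself is missing from this excerpt; the following sketch reconstructs its shape from the description below. The stub types and the ten-minute default lease are assumptions, not the book's exact code:

```java
// Stand-ins for the Jini types so this sketch compiles on its own.
interface Entry extends java.io.Serializable {}
interface JavaSpace {}
interface Command extends Entry {
    Entry execute(JavaSpace space);
}

// Deliberately a concrete class rather than an abstract one, so the
// compute server can instantiate a TaskEntry to use as a take template.
class TaskEntry implements Command {

    // Subclasses must override this; until they do, it fails at runtime.
    public Entry execute(JavaSpace space) {
        throw new UnsupportedOperationException(
            "execute must be overridden by a subclass");
    }

    // How long result entries written by the worker should live in the
    // space; subclasses override this to change the default.
    public long resultLeaseTime() {
        return 1000 * 60 * 10;  // ten minutes, an assumed default
    }
}
```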
The changes to this class are fairly minimal. The
execute method has been updated to reflect the new space parameter specified in the
Command interface, and also to return an
Entry rather than a
ResultEntry. The
execute still throws a runtime exception, thus forcing subclasses to implement the method. Recall the reason we gave in Section 6.2.1 for defining
TaskEntry in this manner (rather than defining an abstract class): It's so the compute server can create a
TaskEntry template and remove any task from a space. If
TaskEntry where declared
abstract, then we couldn't use it to instantiate a template.
We've also added one new method,
resultLeaseTime, that is used to supply a lease time for the result entries that are returned from the
execute method (and then written back into the space by the worker). The implementor of the
TaskEntry can override this method to specify a time other than the default for the results of the computation to remain in the space.
Now that we've gotten these minor differences out of the way, let's move on to more interesting changes, starting by reworking the generic worker.
To review, the basic operation of the worker is straightforward: In a loop we take a task entry from the space and invoke its
execute method. If the method returns an entry then we write it back into the space (for the master to pick up). We then return to the top of the loop and begin again.
Our reworked version includes a few enhancements. Here is the code for the generic worker's
run method:
The first enhancement is the use of
snapshot. Since we use the same task template over and over, we go ahead and create a snapshotted version of the template to avoid re-serialization on every
take operation.
We have also improved the generic worker so that it takes into account the possibility of partial failure. Let's first step back and examine one of several scenarios that can occur with partial failure: consider the case where the generic worker removes a task, starts computing it, and then partial failure occurs. As a result, the task entry will be lost, which may have a detrimental effect on a parallel computation.
To make our compute server robust in the face of partial failure, we use transactions. To create a transaction we make use of the method
getTransaction, which is shown below:
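The listing is absent from this excerpt; the sketch below follows the description that comes next. The Jini-like type names here are minimal stand-ins (the real ones live in net.jini.core.transaction and related packages), so treat this as an illustration of the logic, not the book's exact code:

```java
// Minimal stand-ins mirroring the shape of the Jini transaction API.
interface TransactionManager {}
interface Transaction {
    class Created {
        public final Transaction transaction;
        Created(Transaction t) { transaction = t; }
    }
}
class TransactionFactory {
    static Transaction.Created create(TransactionManager mgr, long leaseTime)
            throws Exception {
        if (mgr == null) throw new Exception("no transaction manager available");
        return new Transaction.Created(new Transaction() {});
    }
}
class TransactionManagerAccessor {
    static TransactionManager manager;  // located via lookup in the real code
    static TransactionManager getManager() { return manager; }
}

class TransactionHelper {
    private static final long TXN_LEASE = 1000 * 60 * 10;  // ten minutes

    // Create a transaction, or print the problem and return null if the
    // manager can't be reached or the lease is denied.
    static Transaction getTransaction() {
        try {
            TransactionManager mgr = TransactionManagerAccessor.getManager();
            Transaction.Created trc = TransactionFactory.create(mgr, TXN_LEASE);
            return trc.transaction;
        } catch (Exception e) {
            e.printStackTrace();
            return null;
        }
    }
}
```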
Here we first obtain access to a transaction manager, which is done through our
TransactionManagerAccessor. We then call
TransactionFactory.create, passing it a transaction manager and a lease time of ten minutes. This lease time has the following effect on the task entry: If a task is not computed within ten minutes then the transaction is aborted by the transaction manager and the
TaskEntry is returned to the space (note that our code does nothing to stop the
execute method if it does not finish in that time period; we leave this as an exercise to the reader).
If an exception occurs creating the transaction, then the exception is printed and
null is returned from the call to
getTransaction; otherwise we return the transaction.
Returning to the generic worker's run method, when the call to getTransaction returns, if a transaction hasn't been created we throw a runtime exception. Otherwise, we call take (passing it our snapshotted template and transaction) and wait indefinitely until it returns a task entry. Once a task entry is returned we call its execute method and assign the return value to the local variable result. If result is non-null, then we write result back into the space under the transaction, with a lease time obtained from calling resultLeaseTime. By overriding the resultLeaseTime method, the designer of a TaskEntry can specify how long a result entry should live on a per-task basis. Finally, once the result is written back into the space, we commit the transaction.

If any exceptions occur along the way we display the exception and then continue at the top of the loop. If, however, we receive an InterruptedException, then we explicitly abort the transaction, assuming the user has interrupted the computation.
We are also going to make a few updates to our GenericMaster class. Recall that our previous version from Chapter 6 called the generateTasks method to generate a set of tasks and then called collectResults to collect any results. To create your own master process you subclass GenericMaster and override these two methods, supplying the code to generate tasks and collect results.
In practice it is often the case that generating tasks and collecting results are two activities that can occur concurrently. Many parallel (as well as distributed) applications generate small sets of tasks in phases over time and collect the results from each phase as they become available. The results from one phase may, in fact, influence the set of tasks that are generated in the next. It may also be the case that a master needs to generate so many tasks that the space would be overwhelmed (or at least resource constrained) in a shared environment; in such a case the master needs to dole out tasks over time, as previous tasks are completed, and not all at once. That will be the case with our application, and we will talk about a technique called watermarking that allows us to keep workers busy, while minimizing the impact on the space.
Here is the new code for our updated GenericMaster, which is designed to allow the generateTasks and collectResults methods to run concurrently:

To allow both methods to work concurrently, we've made a simple alteration to the GenericMaster class; rather than successively calling the two methods, it instead creates two new threads: one that calls generateTasks and the other that calls collectResults.
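The two-thread structure just described can be sketched in plain Java. This is my reconstruction based on the description above, not the book's actual listing (the real class also carries space- and transaction-related state), and the class name GenericMasterSketch is mine:

```java
// Sketch of a master whose task generation and result collection overlap.
public abstract class GenericMasterSketch {

    // Subclasses supply the application-specific behavior.
    protected abstract void generateTasks();
    protected abstract void collectResults();

    // Rather than calling the two methods in succession, start each in
    // its own thread so the two activities can run concurrently.
    public void start() {
        Thread generator = new Thread(new Runnable() {
            public void run() { generateTasks(); }
        });
        Thread collector = new Thread(new Runnable() {
            public void run() { collectResults(); }
        });
        generator.start();
        collector.start();
        try {
            // Wait for both activities to finish.
            generator.join();
            collector.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```

With this shape, results from one phase can feed back into task generation, since both methods are live at the same time.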
That concludes our changes to the compute server.
We are now going to develop a parallel application on top of our compute server. Our application is an exercise in applied cryptography, which is another way of saying we are going to write an application that breaks encrypted passwords.
A common method of encrypting passwords, particularly in UNIX systems, is the use of a one-way encryption algorithm, which takes the user's password as input and returns the user's encrypted password. For instance, a password of "secret" may result in the output "BgU8DFSLhhz6Q." One-way functions that work well for encryption have the property that, given the encrypted version of a password, it is difficult to compute the inverse of the function to produce the original password (which makes it hard to guess passwords from their encrypted form).
The parallel application we are going to be developing will break passwords through the brute force technique of trying every possible combination of ASCII characters that make up a password. This technique works as follows: given a one-way function, let's call it crypt, and an encrypted password e, we can iterate through every possible ASCII password p and compute crypt(p). We then compare the result of crypt(p) with e, and if the two are equal, then p is the password. If not, then we keep trying.
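In code, the brute-force check looks roughly like this. Note that crypt below is a toy stand-in for illustration only (the book's application uses the UNIX crypt algorithm via the JCrypt class, introduced later), and the class name BruteForceSketch is mine:

```java
public class BruteForceSketch {

    // Toy one-way function, only for this sketch; NOT the UNIX crypt.
    static String crypt(String password) {
        return Integer.toHexString(password.hashCode() * 31 + 7);
    }

    // Try each candidate p, compare crypt(p) against the encrypted value e.
    static String crack(String e, String[] candidates) {
        for (String p : candidates) {
            if (crypt(p).equals(e)) {
                return p;   // found the password
            }
        }
        return null;        // no match in this batch; keep trying
    }

    public static void main(String[] args) {
        String encrypted = crypt("dog");
        String[] candidates = { "cat", "dog", "emu" };
        System.out.println("Cracked: " + crack(encrypted, candidates));
    }
}
```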
It isn't difficult to enumerate every possible ASCII password; it can be done by generating a sequence like this: aaaa, aaab, aaac, ..., aaaz, aaba, aabb, ... — with the caveat that characters other than a-z can be used for passwords (in general, all 95 printable ASCII characters can be used).
There are, of course, other methods of trying to break passwords. The most common method is to use a dictionary of words that are then encrypted and checked against the stored encrypted password. This technique often works well in practice, because users often choose common words as passwords. However, it isn't foolproof, because users may choose passwords that aren't in dictionaries.
Exploring the space of every possible password--not just dictionary words--can be a very compute-intensive problem (if not intractable in some cases). Because of this we are going to implement a scaled-down application that breaks passwords of four characters or less. Even with four characters, the space of possible passwords would take a standalone Java application on the order of several hours to compute, which makes a nice size for our parallel experiments.
Our encrypted passwords are based on the UNIX password encryption scheme. The garden-variety UNIX algorithm normally takes a password of eight characters that is prepended by two characters of "salt"--an arbitrary sequence that is prepended to prohibit simple dictionary schemes of password breaking. So to encrypt "secret," UNIX adds two random characters, say "aw" and then encrypts "awsecret" via a one-way function.
The UNIX crypt one-way function is provided in a class called JCrypt, written by John F. Dumas for the Java platform, and can be found in the source code for this book. The details of the class are not important, with the exception of the crypt method that we make use of in our code. The signature for this method is as follows:

public static String crypt(byte[] password);

The crypt method is a static method that takes a password (already prepended with salt) in the form of a byte array and returns an encrypted version of the password in the form of a String.
With this knowledge under our belt, let's design and build the application. Our application is going to follow the standard replicated-worker pattern: Given an encrypted password, a master process enumerates all possible passwords and generates tasks for the workers to verify whether or not the possible passwords match the encrypted version. The workers pick up the tasks and do the verification by the technique mentioned above--they first encrypt the potential password and then compare it against the encrypted password. If the two match we've found the password.
The task entry is the heart of the compute server computation--tasks are written into the space for the GenericWorker to compute. Because we make use of the command pattern, the GenericWorker simply calls each task's execute method. Even if a worker has never seen a specific subclass of a task entry before, the space and its underlying RMI transport take care of the details of shipping the executable behavior to the worker.

Our workers look for tasks of type TaskEntry. Here we extend the task entry to create a new task: the CryptTask.
Let's step through the CryptTask code. A crypt task holds three pieces of information: tries, an integer that specifies the number of potential passwords each task should enumerate and try as a possible match against the encrypted version; word, a byte array that holds the word with which to begin the enumeration; and encrypted, a string that holds the encrypted password we are trying to break. So each task is given a starting point (word) and a number of enumerations (tries), to attempt matching words against the encrypted password (encrypted). All three values can be initialized in a crypt task by calling the supplied convenience constructor.
Next we have the crypt task's execute method: we first convert tries from its wrapper class into a primitive integer that is assigned to num. The rest of execute consists of one main loop that iterates for num times. Each time through the loop, we encrypt a word using the JCrypt class's crypt method and then compare it against the encrypted word. If the two are equal, then we've broken the password, and we create a CryptResult object and return it to the Worker process. The CryptResult object is simply an entry that holds the broken password in the form of a byte array.

If the two are not equal, then we call nextWord, which is shown below:
The nextWord method is a static method (as we will see, it is also used by the CryptMaster) that takes a word in the form of a byte array and alters the array so that it is set to the next word in the ASCII sequence. For instance, if we start at the word "aaaa" and prepend it with the salt sequence "aw", then the word "awaaaa" looks like this if we run it through nextWord a few times: awaaab, awaaac, awaaad, ...
If we take a look at the code for nextWord, we see that the method first increments the right-most character. If this character goes beyond the end of the ASCII sequence (which is the hexadecimal number 0x80) then the character is set to ! (which is the first printable character in the ASCII set), and the neighboring character to the left is incremented by one. The test for 0x80 is then repeated on that character; if equal, the character is also set to ! and the character to the left of it is incremented. And so on.
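Based on that description, nextWord can be reconstructed roughly as follows. This is my sketch, not the book's listing; note that '!' is 0x21, the first printable ASCII character:

```java
public class WordSequence {

    // Advance `word` to the next candidate in the ASCII sequence:
    // increment the right-most character; when it reaches 0x80 (one
    // past the end of the sequence), reset it to '!' and carry the
    // increment into the character to its left.
    public static void nextWord(byte[] word) {
        for (int i = word.length - 1; i >= 0; i--) {
            word[i]++;
            if (word[i] != (byte) 0x80) {
                return;        // no carry needed, we are done
            }
            word[i] = '!';     // wrap around and carry left
        }
    }

    public static void main(String[] args) {
        byte[] word = "awaaaa".getBytes();
        for (int i = 0; i < 3; i++) {
            nextWord(word);
            System.out.println(new String(word));
        }
        // prints awaaab, awaaac, awaaad
    }
}
```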
If CryptTask iterates through num possible passwords without finding a match, then it falls out of the loop and returns a result entry that has its word field set to null (the result entry consists of one field, word, that is used to hold the password when it is found).

To recap, the CryptTask has instructions to check a fixed number of potential passwords against the encrypted password. If a password is found, it is returned in a result entry, but if no password is found, then a result entry is still returned with its word field set to null (we will see why shortly).
Now that we've seen the worker code in detail, let's take a look at the CryptMaster, which drives the entire computation. Let's first examine the user interface of the CryptMaster (shown in Figure 11.1), which will give us a sense of its operation before we dive into the code details.

Figure 11.1 - The CryptMaster Interface

Once started, the CryptMaster encrypts our password and displays it just below the password text entry field. The compute server will now be asked to figure out what our original password was, given the encrypted version. To do this, the crypt master begins generating tasks and collecting results until the workers break the password. During the computation, the right side of the interface provides an information display that shows the total number of words that have been checked against the encrypted password, the average number of words being computed per minute by all workers, and the number of tasks in the space waiting to be computed (this is called the "water level," which we will discuss shortly along with the "high mark" and "low mark" configuration information that appears in the interface).
Below is the skeleton of the CryptMaster code:

CryptMaster subclasses our GenericMaster class; this basic skeleton shows where fields are declared and initialized and where the user interface is set up.
The notable part of this code is the actionPerformed method, which is called when the "Start" button is clicked. This method first checks to see if the start field has already been set to true (indicating that this method has been called before), and if so simply returns. Otherwise we check the value of the fields in the user interface to make sure they contain appropriate values. If they do, we set the value of several fields, including the start field; otherwise we display an error message in the status area of the applet.

The CryptMaster interface works as follows: we enter salt and a password (in the first two text entries in the upper left corner of the interface), and then optionally tweak the parameters below the password text entry, such as the "tries per task" parameter, which controls how many words each task checks against the encrypted password. We then click on the "Start" button, which starts the application.
Now that our user interface code is out of the way, let's look at how CryptMaster generates and collects tasks. Recall that GenericMaster (which CryptMaster subclasses) creates two threads, one that calls generateTasks and another that calls collectResults. Here is the code for generateTasks:
The generateTasks method begins by waiting for the start field to become true. As we saw in the last section, this field is set to true when the user clicks on the "Start" button.

Once start is set to true, the generateTasks method sets the startTime field to the current time and the encrypted field to the encrypted version of the password entered in the text entry field (prepended with salt); then the interface is updated to display the encrypted version. Next the method getFirstWord is called to obtain the first ASCII sequence (see the complete source code for the method definition), which gets assigned to the field testWord; this will be the starting point for enumerating all potential password matches.
Then we enter the main loop of generateTasks, which first tests to see if we are at the last word of the ASCII sequence. Recall from our explanation of the CryptTask's nextWord method that the sequence is enumerated by "incrementing" through the positions of the word from right to left. Here we check the first character of the password's salt. When this character is no longer the salt character (in other words, when it, too, has been incremented) then we know we've exhausted all password possibilities.

Next we create a CryptTask by calling its convenience constructor, supplying the number of tries per task, the encrypted version of the password, and the starting word to begin testing. We then write the crypt task to the space. Next we need to update testWord, so that the next task will begin at the word which is triesPerTask away from this task's starting word. To do this we simply call the nextWord method triesPerTask times (a very fast operation compared to doing the password check of each possibility).
That is the basic version of the generateTasks method, which simply enumerates all possible values and partitions them into a number of tasks that need to be checked by workers. The task generation is problematic, however, in that it can generate a very large number of tasks before the password is discovered. In fact, there are over 81 million possible passwords of length four or less. If we choose a "tries per task" of 1,000 then the master will (in the worst case) generate over 80,000 tasks. This could have a serious impact on the resources of the space. Let's look at one way to improve this code.
How many passwords are possible? For a four-character password we need to check just over 81 million potential passwords. This number comes from the following computation: a password has four character "slots" to be filled, each of which can hold one of 95 possible ASCII characters (we will assume that passwords of 3 characters or less are filled with the space character at the end). Because the choice of each slot is independent of the others we obtain all possibilities by computing 95*95*95*95 = 95^4 = 81,450,625.
How many eight-character passwords are possible?
One observation we can make is that the space needs to hold only enough tasks to keep workers busy. A technique for keeping a space full enough, yet not too full, is called watermarking. The word watermarking comes from the two marks often found on shorelines marking high and low tide. Here we choose a high point and low point, meaning the upper and lower limits on the number of entries we'd like in the space. We fill the space with entries until it reaches the high mark, and then wait until the number of entries falls below the low mark, at which point we start filling it with tasks again.
As an illustration of how watermarking works, assume you have a set of ten workers at your disposal. You might want to set the low and high marks to, say, twelve and twenty respectively. That way you always have at least enough tasks for all workers but at most twenty entries in the space. Tuning watermarks is more of an art than a science. In our case we are going to assume that the high and low marks are set when the computation begins and never change. Often the number of workers varies over time, and the high and low values could be modified adaptively - but we won't get that sophisticated here.
Let's apply the watermarking principle to our generateTasks code. Here is a new version of the generateTasks method's main loop, updated to use watermarking:

And here are the definitions of a couple of helper methods that are used:

Our first addition to the main loop is the call to waitForLowWaterMark, which causes the method to wait until the water level falls to our low watermark (which is specified in the user interface). This is determined by the waterlevel field, which is initially set to zero in its declaration.
Next we've inserted an additional loop, which continues as long as the water level is less than the high watermark. For each task we write into the space we increase the water level by one, until it reaches the high watermark. As we will see in the next section, as results are collected the water level is decremented. Only when the water level falls back to the low watermark do we begin adding more tasks.
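The watermark bookkeeping can be sketched as follows. The method names echo the text (waitForLowWaterMark), but the class name and the synchronization details are my own assumptions, not the book's listing:

```java
public class Watermark {

    private final int lowMark;
    private final int highMark;
    private int waterLevel = 0;   // number of tasks currently in the space

    public Watermark(int lowMark, int highMark) {
        this.lowMark = lowMark;
        this.highMark = highMark;
    }

    // Generator thread: block until enough results have drained the space.
    public synchronized void waitForLowWaterMark() throws InterruptedException {
        while (waterLevel > lowMark) {
            wait();
        }
    }

    // Generator thread: may another task be written right now?
    public synchronized boolean belowHighMark() {
        return waterLevel < highMark;
    }

    public synchronized void taskWritten() {
        waterLevel++;
    }

    // Collector thread: one result taken means one task left the space.
    public synchronized void resultCollected() {
        waterLevel--;
        notifyAll();   // possibly wake the waiting generator
    }

    public synchronized int level() {
        return waterLevel;
    }
}
```

The generator's loop then becomes: wait for the low mark, then write tasks while belowHighMark() holds, calling taskWritten() after each write.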
Now let's look at how results are collected. The code for collectResults is given below:

In the collectResults method we first declare a local variable count, which will be used to keep a count of the number of tasks that have been computed. We then create a snapshotted version of the CryptResult entry, which will be used as a template to take results from the space. Once again, the result template does not vary when it is used, so snapshotting the entry is better than reserializing the entry each time we call take.
Next we enter the main loop of the method; in this loop we first take a result entry from the space, waiting as long as necessary. Once we have retrieved a result entry, we increment the count variable and update the information display of the user interface by printing the total number of words that have been tried against the encrypted password (computed by count * triesPerTask). We then update the performance area of the information display by calling updatePerformance with the total number of word tries as a parameter. The code for updatePerformance is given as follows:

First we get the current time and assign it to now. Then we compute the average number of words computed per second, which is wordsTried divided by the number of seconds since CryptMaster began running (which is ((now - startTime)/1000)). This gives us an average number of words computed per second. We then set the appropriate label in the information display.
Now, returning to collectResults, after updating the display we decrement the water level, since we know a task has been removed and computed. We then check to see if the result entry we retrieved has the cracked password in it. If result.word is non-null, then we have the password and we print it in the information display.

At this point we are finished, but before we return from collectResults, we first call addPoison, which we describe in the next section.
The addPoison method allows us to clean up the space, which is needed because, when a valid password is found, the space may be left with task entries that no longer need to be computed. There may also be result entries that need to be cleaned up as well. In the latter case, we can expect leases to take care of removing the results. However, the disadvantage of leaving the task entries around is that the workers will continue to compute them, which would be a waste of resources in a shared compute server environment.

One approach to cleaning up task entries is to have the master collect the remaining tasks until none remain. A better approach is to let the workers clean up the tasks themselves. This can be done using a variant of a barrier (see Chapter 4), which is referred to as a poison pill. Poison pills work like this: When a task's execute method is called by the worker, the task checks to see if a poison pill exists in the space. If one exists, then the task simply returns (and is therefore not computed), but if no pill exists, the task is computed.
To implement a poison pill we first create a poison entry:
public class PoisonPill implements Entry {
    public PoisonPill() {
    }
}
As you can see, the PoisonPill is a simple entry, whose only purpose is to act as a barrier in the space.

To make use of the poison pill we also need to alter the execute method of our CryptTask entry:

Here we first look for a poison pill in the space using readIfExists. If one exists then we skip the computation and return null. Otherwise, we compute the entry as normal. When the poison pill exists in the space, it has a flushing effect on the remaining task entries: Whenever a worker takes a task and calls execute, a poison pill is found and the method returns immediately, so tasks are removed from the space instead of being computed.
Finally, our application is complete. Now it is time to fire it up and experiment. As with the previous version of the compute server, we first start up one or more generic workers and then start up the CryptMaster applet. At this point you should get a feel for how quickly the words are computed using one machine. If you are using a garden-variety machine, then you should expect to see thousands of words processed per second.
With that benchmark behind you, begin to experiment with adding more machines. Pay attention to how the word rate increases as processors are added to the computation.
At this point you may want to tweak the "tries per task" parameter. If you collect data across a number of runs, you should see the effect of computation versus communication. As the tries-per-task parameter is raised, computation plays a larger role in the runtime behavior; as it is lowered, communication plays a larger role.
If you have access to a large number of machines, you will also want to study the speedup gained with each new processor. For a reasonable number of processors and a large enough "tries per task" you should expect to see perfect speedup. Perfect speedup means that if you use n processors you will compute a problem n times as fast as one processor. However as more and more processors are added, the effect may eventually level off, given contention for the space resource.
In this chapter we've explored not only the mechanics of creating a compute server, but also some of the subtle issues that arise in building parallel applications. As we've seen, creating a replicated-worker application isn't as easy as just generating tasks, dropping them into a space, and letting workers compute them. Often we need to consider the granularity of the tasks as well as the resource constraints of the space in a shared environment.
1 As used on this web site, the terms "Java virtual machine" or "JVM" mean a virtual machine for the Java platform.
Eric Freeman is co-founder and CTO of Mirror Worlds Technologies, a Java and Jini-based software company. Dr. Freeman previously worked at Yale University on space-based systems, and is a Fellow at Yale's Center for Internet Studies. Susanne Hupfer is Director of Product Development for Mirror Worlds Technologies and a fellow of the Yale University Center for Internet Studies. Dr. Hupfer previously taught Java network programming as an Assistant Professor of Computer Science at Trinity College. Ken Arnold is the lead engineer of the JavaSpaces product at Sun. He is one of the original architects of the Jini platform and is co-author of The Java Programming Language, Second Edition. | http://www.oracle.com/technetwork/java/chapter11-138585.html | CC-MAIN-2014-41 | refinedweb | 5,359 | 56.89 |
NAME
uselib - load shared library
SYNOPSIS
#include <unistd.h>

int uselib(const char *library);
DESCRIPTION
The system call uselib() serves to load a shared library to be used by the calling process. It is given a pathname. The address where to load is found in the library itself. The library can have any recognized binary format.
RETURN VALUE
On success, zero is returned. On error, -1 is returned, and errno is set appropriately.
ERRORS

ENFILE The system-wide limit on the total number of open files has been reached.

ENOEXEC The file specified by library is not an executable of known type, e.g., does not have the correct magic numbers.
CONFORMING TO
uselib() is Linux-specific, and should not be used in programs intended to be portable.
NOTES
SEE ALSO
ar(1), gcc(1), ld(1), ldd(1), mmap(2), open(2), dlopen(3), capabilities(7), ld.so(8)
COLOPHON
This page is part of release 3.01 of the Linux man-pages project. A description of the project, and information about reporting bugs, can be found at. | http://manpages.ubuntu.com/manpages/intrepid/man2/uselib.2.html | CC-MAIN-2015-11 | refinedweb | 175 | 69.18 |
13 December 2011 19:23 [Source: ICIS news]
LONDON (ICIS)--Shell has agreed to lease its base oil plant and some facilities at its refining site in Hamburg, Germany, to Sweden's Nynas.
Nynas plans to rebuild the Hamburg refining site into a 330,000 tonne/year specialty oils refinery by around mid-2014, it said.
Nynas and its law firm, Clifford Chance, confirmed the deal in separate statements. Officials at Shell’s regional offices in
“The [Hamburg-Harburg] refinery will continue to produce as today but will over the next 24 months be converted into a stand-alone specialty oil refinery,” said Nynas president Staffan Lennstrom.
“A new hydrogen unit and an extensive conversion programme will transform the premises into a world class stand-alone naphthenic specialty products refinery,” Lennstrom said.
Financial terms were not disclosed.
Nynas expects to employ about 220 of Shell's workforce in Hamburg.
The deal is subject to approval by EU competition authorities.
Shell had announced plans to dispose of the 5.5m tonne/year refinery at Hamburg-Harburg back in 2009 as part of its strategy to focus on its larger refineries, such as its facility at Wesseling-Godorf near Cologne, Germany.
After failing to find a buyer for the refinery, Shell said earlier this year it planned to run the refinery until March 2013, sell the base oil assets, and convert the remaining site into a terminal for oil products.
Last year, Shell already sold its 4.5m tonne/year refinery at Heide, near Hamburg.
Life has become much easier for Java programmers working on multi-threaded applications since the release of JDK 5. JDK 5 brought many features for multi-threaded processing that previously were a kind of nightmare for application developers, and even worse for those who had to debug that code later for bug fixing; sometimes it resulted in deadlock situations as well.

In this post, I will suggest using one such new feature, ThreadPoolExecutor, in combination with BlockingQueue, and will share best practices for using these classes in your application.
Sections in this post:
- Introducing DemoTask
- Adding CustomThreadPoolExecutor
- Explaining BlockingQueue
- Explaining RejectedExecutionHandler
- Testing our code
Introducing DemoTask
I will not take much time here, as our demo task (the DemoThread class below) is just another test Runnable written to support our logic and code.
package corejava.thread;

public class DemoThread implements Runnable {

    private String name = null;

    public DemoThread(String name) {
        this.name = name;
    }

    public String getName() {
        return this.name;
    }

    @Override
    public void run() {
        try {
            Thread.sleep(500);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        System.out.println("Executing : " + name);
    }
}
Adding CustomThreadPoolExecutor
This is important. Our CustomThreadPoolExecutor is an extension of ThreadPoolExecutor. Even without extending ThreadPoolExecutor, simply creating an instance and using it will work correctly, but we would miss some extremely useful features in terms of control over execution.

ThreadPoolExecutor provides two excellent methods that I highly recommend overriding: beforeExecute() and afterExecute(). They provide a very good handle on the execution life cycle of the runnables being executed. Let's see these methods inside our CustomThreadPoolExecutor.
package corejava.thread;

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class CustomThreadPoolExecutor extends ThreadPoolExecutor {

    public CustomThreadPoolExecutor(int corePoolSize, int maximumPoolSize,
            long keepAliveTime, TimeUnit unit, BlockingQueue<Runnable> workQueue) {
        super(corePoolSize, maximumPoolSize, keepAliveTime, unit, workQueue);
    }

    @Override
    protected void beforeExecute(Thread t, Runnable r) {
        super.beforeExecute(t, r);
        System.out.println("Perform beforeExecute() logic");
    }

    @Override
    protected void afterExecute(Runnable r, Throwable t) {
        super.afterExecute(r, t);
        if (t != null) {
            System.out.println("Perform exception handler logic");
        }
        System.out.println("Perform afterExecute() logic");
    }
}
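The beforeExecute()/afterExecute() hooks are good for more than logging. As a sketch (the class name TimingThreadPoolExecutor and its fields are my own, not part of this article), here is a variant that measures per-task execution time with a ThreadLocal, a pattern also shown in the ThreadPoolExecutor Javadoc:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

public class TimingThreadPoolExecutor extends ThreadPoolExecutor {

    // One start timestamp per worker thread: set in beforeExecute(),
    // consumed in afterExecute() on the same thread.
    private final ThreadLocal<Long> startTime = new ThreadLocal<Long>();
    private final AtomicLong totalNanos = new AtomicLong();
    private final AtomicLong taskCount = new AtomicLong();

    public TimingThreadPoolExecutor(int corePoolSize, int maximumPoolSize) {
        super(corePoolSize, maximumPoolSize, 5000, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<Runnable>(50));
    }

    @Override
    protected void beforeExecute(Thread t, Runnable r) {
        super.beforeExecute(t, r);
        startTime.set(System.nanoTime());
    }

    @Override
    protected void afterExecute(Runnable r, Throwable t) {
        totalNanos.addAndGet(System.nanoTime() - startTime.get());
        taskCount.incrementAndGet();
        super.afterExecute(r, t);
    }

    public long averageTaskNanos() {
        long n = taskCount.get();
        return n == 0 ? 0 : totalNanos.get() / n;
    }

    // Small self-test: run 5 no-op tasks and report how many completed.
    public static long demo() {
        TimingThreadPoolExecutor executor = new TimingThreadPoolExecutor(2, 4);
        for (int i = 0; i < 5; i++) {
            executor.execute(new Runnable() {
                public void run() { /* simulated work */ }
            });
        }
        executor.shutdown();
        try {
            executor.awaitTermination(10, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return executor.taskCount.get();
    }

    public static void main(String[] args) {
        System.out.println("Tasks timed: " + demo());
    }
}
```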
Explaining BlockingQueue
If you remember solving the producer-consumer problem before JDK 5, the consumer had to busy-wait until the producer put something into the shared queue. This coordination is now handled for you by BlockingQueue.

BlockingQueue is like other Queue implementations, but with additional capabilities. Any attempt to retrieve an element from it is safe, in the sense that it will not return empty-handed: the consumer thread automatically waits until the BlockingQueue is populated with some data. Once data arrives, the thread consumes the resource.
BlockingQueue works on following rules:
- If a consumer thread tries to take an element from an empty queue, it is blocked until a producer puts an element into the queue.
- If a producer thread tries to put an element into a full (bounded) queue, it is blocked until a consumer takes an element out and frees up space.
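A minimal sketch of this blocking behavior (the class and names here are my own, for illustration): the consumer calls take() first and simply blocks until the producer's put() lands.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class BlockingQueueDemo {

    public static String demo() {
        final BlockingQueue<String> queue = new ArrayBlockingQueue<String>(2);

        // Producer: sleeps, then puts an item. put() itself would block
        // if the bounded queue were already full.
        Thread producer = new Thread(new Runnable() {
            public void run() {
                try {
                    Thread.sleep(200);
                    queue.put("payload");
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        });
        producer.start();

        try {
            // Consumer side: take() blocks here until the put() above.
            String item = queue.take();
            producer.join();
            return item;
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return null;
        }
    }

    public static void main(String[] args) {
        System.out.println("Consumed: " + demo());
    }
}
```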
Explaining RejectedExecutionHandler
So the danger is that a task can be rejected. We need something in place to handle this situation, because nobody wants to lose a single job in their application.
Can we do something about it? Yes, we can…[Borrowed from Obama]
When a task cannot be accepted (the work queue is full and the pool has reached its maximum size), the executor throws a RejectedExecutionException; we can add a handler for it.

Adding a RejectedExecutionHandler is considered a good practice when using the new concurrency APIs.
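Besides writing your own handler, the JDK ships four ready-made policies as static nested classes of ThreadPoolExecutor: AbortPolicy (the default, which throws RejectedExecutionException), CallerRunsPolicy, DiscardPolicy, and DiscardOldestPolicy. Here is a small sketch (class name CallerRunsDemo is mine) using CallerRunsPolicy, which throttles submission by running rejected tasks in the submitting thread so no work is dropped:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class CallerRunsDemo {

    public static int demo() {
        final AtomicInteger executed = new AtomicInteger();

        // A tiny pool and a tiny queue, so rejections actually happen.
        ThreadPoolExecutor executor = new ThreadPoolExecutor(
                1, 1, 0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<Runnable>(1),
                new ThreadPoolExecutor.CallerRunsPolicy()); // rejected tasks run in caller's thread

        for (int i = 0; i < 20; i++) {
            executor.execute(new Runnable() {
                public void run() {
                    executed.incrementAndGet();
                }
            });
        }

        executor.shutdown();
        try {
            executor.awaitTermination(10, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return executed.get(); // CallerRunsPolicy never drops work
    }

    public static void main(String[] args) {
        System.out.println("Tasks executed: " + demo());
    }
}
```

The custom handler in the test case below takes a different approach: it waits a second and resubmits the rejected task.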
Testing our code
I am done talking; it's time to see whether what I said actually works. Let's write a test case.

We have some 100 tasks. We want to run them using ideally 10, and at most 20, threads. I have written the code below; you might write it differently or better.
package corejava.thread;

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.RejectedExecutionHandler;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class DemoExecutor {

    public static void main(String[] args) {
        Integer threadCounter = 0;
        BlockingQueue<Runnable> blockingQueue = new ArrayBlockingQueue<Runnable>(50);
        CustomThreadPoolExecutor executor = new CustomThreadPoolExecutor(10, 20,
                5000, TimeUnit.MILLISECONDS, blockingQueue);

        executor.setRejectedExecutionHandler(new RejectedExecutionHandler() {
            @Override
            public void rejectedExecution(Runnable r, ThreadPoolExecutor executor) {
                System.out.println("DemoTask Rejected : " + ((DemoThread) r).getName());
                System.out.println("Waiting for a second !!");
                try {
                    Thread.sleep(1000);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
                System.out.println("Lets add another time : " + ((DemoThread) r).getName());
                executor.execute(r);
            }
        });

        // Let start all core threads initially
        executor.prestartAllCoreThreads();

        while (true) {
            threadCounter++;
            // Adding threads one by one
            System.out.println("Adding DemoTask : " + threadCounter);
            executor.execute(new DemoThread(threadCounter.toString()));
            if (threadCounter == 100)
                break;
        }
    }
}
Execute the above code and you will see that the result is as desired, and the performance is good as well.
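One caveat with the test case above: after the loop breaks, the pool's worker threads are non-daemon and stay alive, so the JVM keeps running. To terminate cleanly, call shutdown() followed by awaitTermination(). A hedged sketch (the class name ShutdownDemo is mine):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class ShutdownDemo {

    public static boolean demo() {
        ThreadPoolExecutor executor = new ThreadPoolExecutor(
                2, 4, 5000, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<Runnable>(50));

        for (int i = 0; i < 10; i++) {
            executor.execute(new Runnable() {
                public void run() { /* simulated work */ }
            });
        }

        // shutdown(): stop accepting new tasks but finish everything
        // already queued. shutdownNow() would also interrupt running tasks.
        executor.shutdown();
        try {
            // Returns true once all tasks finished within the timeout.
            return executor.awaitTermination(10, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println("Terminated cleanly: " + demo());
    }
}
```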
I hope I was able to make my point here. Please let me know your thoughts as well.
Happy Learning !!
31 thoughts on “How to use BlockingQueue and ThreadPoolExecutor in java”
hi Lokesh,
Nice tutorial!!!!
When using ThreadPoolExecutor, Suppose if some thread is blocked, and i want to terminate all active threads and restart the ThreadPoolExecutor.
Is it possible to terminate all active threads and start creating threads again to perform the task.
Could you please explain. Thanks in advance..:)
This line is giving an error:

BlockingQueue<Runnable> blockingQueue = new ArrayBlockingQueue<Runnable>(50);
Yes, this line gives an error.
I am using this constructor.
Can I use a LinkedBlockingQueue, like LinkedBlockingQueue logQueue = new LinkedBlockingQueue(QUEUE_CAPACITY)?
I do not see any problem as far as it is implementation of BlockingQueue.
How to maintain the thread execution order? I want threads to be executed in the order they are created.
Two thoughts come up immediately in my mind.
1) If you want ordering, then no need to multi-thread your application. Multi-threading is inherently un-ordered and for parallel execution.
2) If you still demand ordering based on the object counter, doesn't the Queue process them sequentially in the order they were added? Yes, it does.
Am I missing anything, Guys?
I think they are not executing in the same order as they are created. For example, in my first run of the above code, thread 5 got executed before thread 1
Hi Lokesh, Thanks for the beautiful explanation!!
blockingQueue has never been used – why have you declared it in the first place?
I passed it into CustomThreadPoolExecutor’s constructor.
Your main program never terminates.
Yes, it’s expected and intended behavior.
I have a scenario where I need module-based scheduling. Probably I will consider a separate BLOCKING QUEUE for each module, to handle their different implementations. Do you think that creating a Map from module name to its Blocking queue would be the right solution?
Seems ok to me. Basically you would like to hide it behind a factory kind of wrapper. It will help in replacing the logic if not proved correct in future.
Is there any way that I can override the getTask() method of ThreadPoolExecutor? I need to implement my own logic to get the next task.
Thanks,
Dimal
No, you cannot. The reason is simple, in two aspects. First, the method getTask() is private, so you cannot override it. Second, this method plays a critical part in fetching the next executable/runnable from the queue. If you change the logic here, then the whole idea of using a Queue falls apart. Queues use a fixed insertion/retrieval ordering, and any program should honor this default behavior.
Regarding your own logic to get the next task, I will suggest you research the PriorityBlockingQueue concept. You can add a comparator to a priority queue and it will re-arrange (theoretically, I suppose – not tested myself) the runnables in the queue, and whenever the executor asks for a new task, the queue can provide the next task as per the logic in your comparator.
Again, I have not tested it myself but this should be your area of research.
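To make the PriorityBlockingQueue suggestion concrete, here is a small stand-alone sketch. The class and field names are illustrative, not from the article; note that the comparator is needed because plain runnables are not Comparable, and handing such a queue to a ThreadPoolExecutor would need the same care.

```java
import java.util.Comparator;
import java.util.concurrent.PriorityBlockingQueue;

// A task wrapper carrying a priority value (hypothetical name).
class PriorityTask implements Runnable {
    final int priority;
    final String name;

    PriorityTask(int priority, String name) {
        this.priority = priority;
        this.name = name;
    }

    @Override
    public void run() {
        System.out.println("Running " + name);
    }
}

public class PriorityQueueDemo {
    public static void main(String[] args) {
        // Order tasks by their priority field, lowest value first.
        PriorityBlockingQueue<Runnable> queue = new PriorityBlockingQueue<>(
                10, Comparator.comparingInt(t -> ((PriorityTask) t).priority));
        queue.add(new PriorityTask(3, "low priority"));
        queue.add(new PriorityTask(1, "high priority"));
        queue.add(new PriorityTask(2, "medium priority"));
        // poll() hands tasks back in comparator order, not insertion order,
        // so an executor draining this queue would pick "high priority" first.
        System.out.println(((PriorityTask) queue.poll()).name);
    }
}
```

An executor draining this queue would therefore receive tasks by priority rather than by submission order.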
Thanks Lokesh for your valuable feedback. I have a question about the second point that you mentioned: why do we need a complex method to fetch the next executable from the queue? So you are saying that if I have my own implementation of getTask(), there is no need to extend ThreadPoolExecutor.
Thanks,
Dimal
Hi,
Is there a possibility that I can reduce the number of threads of the executor service once the threads are started? executor.shutdown() will terminate all the threads, but can I terminate just one of the threads which was started by the executor service?
Thanks,
Bala
Hi,
I am facing a problem with ThreadPoolExecutor; I am using an ArrayBlockingQueue to queue the tasks. CorePoolSize is 50, maximum pool size is the MAX value of Integer. When I submit requests and the number of threads reaches corePoolSize, it queues up the requests in the ArrayBlockingQueue but never executes the run() method of the submitted requests. Could you please let me know what could be the reason for this, and a possible solution? My application is timing out because of this, as it does not get a response.
Thanks
AK
Because your tasks are not getting executed. Try putting some log statements or sysout statements to check whether even a single task was executed.
Then check whether they are being rejected. Add a RejectedExecutionHandler.
If still stuck, paste the code here. I will try to get you a solution.
I still don't see why new tasks should be rejected if the queue is full. If new tasks are rejected when the queue is full, the purpose of using a bounded queue is lost. The purpose of a bounded queue is that the put() method blocks when the queue is full. It seems the ThreadPoolExecutor is not using this feature.
Therefore, I think this is a design fault in ThreadPoolExecutor.
Hmmm, it's a good point. But I do not completely agree with you. The reason is that this is well-documented behavior. Please refer to the bounded queue and unbounded queue sections.
It is desired in many cases to prevent resource exhaustion. Otherwise the waiting queue can keep growing and eat up the whole system's resources.
New tasks submitted in method execute(java.lang.Runnable) will be rejected when the Executor has been shut down, and also when the Executor uses finite bounds for both maximum threads and work queue capacity, and is saturated
100% correct. You are right.
Try using the CallerRunsPolicy:
From ThreadPoolExecutor JavaDoc:
“In ThreadPoolExecutor.CallerRunsPolicy, the thread that invokes execute itself runs the task. This provides a simple feedback control mechanism that will slow down the rate that new tasks are submitted.”
It also ensures that tasks will never be discarded.
It's a good point.
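The CallerRunsPolicy suggestion can be sketched stand-alone (pool sizes, timings, and names below are arbitrary choices, not from the article): with one worker and a one-slot queue, overflow tasks are executed by the submitting thread itself.

```java
import java.util.Set;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class CallerRunsDemo {
    static Set<String> runDemo() {
        Set<String> threads = ConcurrentHashMap.newKeySet();
        // One worker and a one-slot queue: several of the 5 tasks overflow,
        // and CallerRunsPolicy executes the overflow in the submitting thread.
        ThreadPoolExecutor executor = new ThreadPoolExecutor(
                1, 1, 0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<Runnable>(1),
                new ThreadPoolExecutor.CallerRunsPolicy());
        for (int i = 0; i < 5; i++) {
            executor.execute(() -> {
                threads.add(Thread.currentThread().getName());
                try {
                    Thread.sleep(100); // keep the worker busy to force rejections
                } catch (InterruptedException ignored) {
                }
            });
        }
        executor.shutdown();
        try {
            executor.awaitTermination(5, TimeUnit.SECONDS);
        } catch (InterruptedException ignored) {
        }
        return threads;
    }

    public static void main(String[] args) {
        // The caller ("main") ends up executing some tasks itself,
        // which naturally slows down the submission rate.
        System.out.println(runDemo().contains("main"));
    }
}
```

Because the main thread spends time running overflow tasks, it cannot submit new ones in the meantime – that is the feedback mechanism the Javadoc describes.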
Using a RejectedExecutionHandler is a good concept. Liked it.
The RejectedExecutionHandler shown here has one problem with this approach. When a very large number of executions come in within a very short time (let's say the implementation is a kind of server), the method may encounter a stack overflow, as rejectedExecution() calls execute(), which rejects the task back to rejectedExecution(), and so on, eventually overflowing the stack. I write this from experience using such an implementation in a high-volume transaction system. Thinking of a different solution, as I can't lose executions.
Pretty valid point. Any suggestion what you would like to do in this case? One solution may be increasing the size of the thread pool or the queue, but that has its own disadvantages in resource-critical systems. Another approach may be to let tasks be rejected and log them somewhere, to get a report in future.
Writing Code in LightSwitch
When you write code for your application, you will use the Code Editor. The code that you write in a LightSwitch application will mostly be in built-in methods, that is, methods of entities, screens, and queries. For example, every screen has a <ScreenName>_CanRun() method, where <ScreenName> is the name of the screen. You would typically write code in this method to check whether a user has permission to see the screen. For more information about how to write code in methods, see How to: Handle Data Events, How to: Handle Screen Events, or How to: Handle Query Events.
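As an illustrative sketch only (the screen name and permission identifier below are hypothetical, and the exact member names follow the general LightSwitch access-control pattern rather than this page), such a method might look like:

```csharp
// Hide the OrdersScreen from users who lack the (hypothetical)
// ViewOrders permission; LightSwitch evaluates this before
// showing the screen in the navigation menu.
partial void OrdersScreen_CanRun(ref bool result)
{
    result = this.User.HasPermission(Permissions.ViewOrders);
}
```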
For more advanced scenarios, you might write code that uses the LightSwitch object model. For example, you might write code that uses the data model to handle concurrency issues that occur when saving data. For more information, see Performing Data-Related Tasks by Using Code
You can use either the Visual Basic or C# programming language. Both are equally capable. It is a matter of personal choice. You cannot mix Visual Basic and C# code in a single project, and you must make the choice when you create the project.
More than just a text editor, the Code Editor uses a technology known as IntelliSense to help you write code by providing relevant information as you type. There are several features of IntelliSense that can make your coding tasks easier. These features include List Members, Parameter Info, Quick Info, Complete Word, and Syntax Tips.
List Members
When you type the name of a type or namespace in the Code Editor, a list of all the valid methods, properties, and events appears in a drop-down list. An example of code written in a method that displays the list members is shown in the following illustration.
You can scroll through the list or type the first few letters of the member to move to that member in the list. Then press ENTER to add that member to your code.
Parameter Info
When a method takes parameters, IntelliSense displays information about the parameters, such as the type of parameter, the name, and the number of parameters required. If a function is overloaded, you will see UP and DOWN arrows that let you scroll through all the function overloads, as shown in the following illustration.
As you type the parameter, the list displays the next parameter in bold font.
Quick Info
You can display the complete declaration of an identifier in your code by holding your mouse pointer over the identifier. The following illustration shows the Quick Info box that appears.
Complete Word
The Complete Word feature finishes typing a variable, command, or function name once you have entered enough characters to identify it. Type the first few letters of the name to filter the words in the list, and then press ALT+RIGHT ARROW to complete the word.
The following illustration shows an example of the completion list that appears when you type code in the Code Editor.
Additional IntelliSense Features
Keyword IntelliSense filters the completion list as you type, making it easier to find the item you are looking for. If you do want to see the complete list, you can press CTRL+J. When you start to type again, the list will again become filtered.
Initscripts
From Mandriva Community Wiki
Daemon processes often need an initscript to be started at boot time. In order to ease the integration and the management of the service, you need to follow some rules.
Initscript usage
To manage initscripts, you should use service and chkconfig.
service
service is a shell script used to start and stop services, either standalone or run with xinetd. Usage is simple, just use
service <name_of_service> <action>
where action can be one of start, stop, status, restart, reload (same as restart if the package doesn't support on-the-fly reloading) or anything the script can support.
Chkconfig
Chkconfig is used to manage the /etc/rc?.d/* and xinetd config files. It removes and adds symlinks in order to start or stop a service at boot for a given runlevel. You should use it to add your script to the system. Macros are provided to wrap the call to chkconfig.
An example
Here is a generic example. You can find others in /etc/init.d/. It should be quite easy to adapt, as you only need to change the variable DAEMON_NAME.
#!/bin/sh
#
### BEGIN INIT INFO
# Provides: some_daemon
# Required-Start: $network
# Required-Stop: $network
# Default-Start: 3 4 5
# Short-Description: nothing
# Description: some_daemon is nothing.
#              Really, nothing.
### END INIT INFO

# Source function library.
. /etc/rc.d/init.d/functions

DAEMON_NAME="some_daemon"
DAEMON_BINARY=/usr/sbin/some_daemon
DAEMON_PROCESS=some_daemon
LOCK_FILE=/var/lock/subsys/some_daemon
RETVAL=0

# Source service configuration, overriding the defaults above.
[ -f /etc/sysconfig/some_daemon ] && . /etc/sysconfig/some_daemon

start() {
    echo -n "Starting $DAEMON_NAME: "
    daemon $DAEMON_BINARY
    RETVAL=$?
    echo
    [ $RETVAL -eq 0 ] && touch $LOCK_FILE
}

stop() {
    echo -n "Shutting down $DAEMON_NAME: "
    killproc $DAEMON_PROCESS
    RETVAL=$?
    echo
    [ $RETVAL -eq 0 ] && rm -f $LOCK_FILE
}

reload() {
    echo -n "Reloading $DAEMON_NAME configuration: "
    killproc $DAEMON_PROCESS SIGHUP
    RETVAL=$?
    echo
}

case "$1" in
    start)
        start
        ;;
    stop)
        stop
        ;;
    status)
        status $DAEMON_PROCESS
        RETVAL=$?
        ;;
    reload)
        reload
        ;;
    restart)
        stop
        start
        ;;
    condrestart)
        if [ -f $LOCK_FILE ]; then
            stop
            start
        fi
        ;;
    *)
        echo "Usage: $0 {start|stop|restart|reload|condrestart|status}"
        RETVAL=1
esac
exit $RETVAL
The header
The initscript header should use LSB tags ( chkconfig headers are deprecated and should not be used anymore). See Initscript Header Format for more details.
First part - Configuration
# Source function library.
. /etc/rc.d/init.d/functions

DAEMON_NAME="some_daemon"
DAEMON_BINARY=/usr/sbin/some_daemon
DAEMON_PROCESS=some_daemon
LOCK_FILE=/var/lock/subsys/some_daemon
RETVAL=0

[ -f /etc/sysconfig/some_daemon ] && . /etc/sysconfig/some_daemon
The first part deals with utility functions provided by /etc/rc.d/init.d/functions.
The second part deals with useful script global variables. $DAEMON_NAME is used for providing user feedback, $DAEMON_BINARY is used to actually launch the service, and $DAEMON_PROCESS to communicate with it once launched. Depending on the service, they may be different or identical. RETVAL is used for the action result status.
The third part deals with service configuration. If the service accepts some options, you should place a configuration file in /etc/sysconfig/. In order to ease the administrator's duty and prevent namespace clashes, it should have the same name as the service. Placing default values in the script and in the config file is more robust than default values in the config file only.
Second part - Primitives
Using functions for defining primitives, such as start and stop, allows restarting the service without re-invoking the script itself. Each of these functions should set RETVAL.
A typical script needs the following functions:

start() {
    echo -n "Starting $DAEMON_NAME: "
    daemon $DAEMON_BINARY
    RETVAL=$?
    echo
    [ $RETVAL -eq 0 ] && touch $LOCK_FILE
}
This function is used to start the service. You have to take care of the lock before launching, to avoid having several instances running simultaneously. The "daemon" function takes care of everything (using libsafe, creating a pidfile, etc.), and should return OK only if the startup succeeded. However, some daemons do not crash at the beginning, like older versions of openldap, and still show OK even if they do not work.
stop() {
    echo -n "Shutting down $DAEMON_NAME: "
    killproc $DAEMON_PROCESS
    RETVAL=$?
    echo
    [ $RETVAL -eq 0 ] && rm -f $LOCK_FILE
}
This function is used to stop the service. The "killproc" function takes care of sending SIGSTOP (or any other given signal) to the process. Then you have to remove the lock file.
reload() {
    echo -n "Reloading $DAEMON_NAME configuration: "
    killproc $DAEMON_PROCESS SIGHUP
    echo
}
This function is used to reload the service, if the daemon supports it.
Last part - Handling arguments
case "$1" in
    .....
esac
exit $RETVAL
A typical script accepts the following actions:
start
Call start() function.
stop
Call stop() function.
status
Call status() function.
restart
restart)
    stop
    start
    ;;
This action simply call stop and start, without checking if the service is currently running.
condrestart
condrestart)
    if [ -f $LOCK_FILE ]; then
        stop
        start
    fi
    ;;
This action call stop and start, but only if the service is currently running.
reload
Just call reload() functions, if present. Otherwise, you may as well handle "reload" as "restart"
default action
*)
    echo "Usage: $0 {start|stop|restart|reload|condrestart|status}"
    RETVAL=1
If nothing else matches the action specified, the script should show an error message, and return an exit value of 1 to show there is an error. If you add some actions, do not forget to add them to the error message. Bash completion uses a regexp matching "Usage: .* {.*}", so do not change the format of the messages.
The daemon function
The "daemon" function is defined in /etc/init.d/functions. Right now, there are some arguments to pass:
- --user <uid>: to run the daemon under the specified UID
- --check <name>: to define the name of file with the pid of the process to check (in /var/run/$name.pid )
- +/-[0-9]: to add a nice level. the nice level can also be set in the config file, with the NICELEVEL variable
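As a sketch of how those options fit together (the stub below only echoes what the real daemon function from /etc/init.d/functions would do; the service name and paths are illustrative):

```shell
#!/bin/sh
# Stub "daemon" function -- parses the subset of options described above
# and echoes the effect instead of actually starting anything.
daemon() {
    while [ $# -gt 0 ]; do
        case "$1" in
            --user) user="$2"; shift 2 ;;
            --check) pidfile="/var/run/$2.pid"; shift 2 ;;
            [+-][0-9]*) nice_level="$1"; shift ;;
            *) binary="$1"; shift ;;
        esac
    done
    echo "would start $binary as $user (nice ${nice_level:-0}, pid file $pidfile)"
}

daemon --user nobody --check mydaemon +5 /usr/sbin/mydaemon
```

Running the last line prints how the daemon would be launched: as user nobody, re-niced by +5, tracked through /var/run/mydaemon.pid.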
Integration in a package
In order to integrate your services in a package, you should be sure of the following points:
File naming
The filename should not contain a dot, as chkconfig will not take it into account. Also, it should have the same name as the package, in lower case. The file should be in the directory /etc/rc.d/init.d/, which is linked by /etc/init.d. An rpm macro exists for this path, %_initrddir.
/home/misc $ rpm --eval %_initrddir
/etc/rc.d/init.d
Files permissions
The script should be runnable by root, and therefore, the permissions should be rwxr-xr-x. Msec resets the permissions once installed.
Macros in %post and %preun
In order to register your service, you need to place 2 macros, one in the %preun section, one in the %post section:
# service_name is the name of the script
%post
%_post_service service_name

%preun
%_preun_service service_name
See the scripts /usr/share/rpm-helper/add-service and del-service. They take care of adding it to the boot sequence, and stopping or restarting the service if needed.
If the service should not be started by default because it is missing some configuration, add the checking logic in the initscript, and provide documentation on how to finish the setup.
RPM requirements
As the previous macros use the rpm-helper scripts, you need to be sure that rpm-helper is installed before the current rpm. All you need is to add this tag to the spec:
Requires(pre): rpm-helper
Rpmlint errors
rpmlint uses a dedicated module to check the initscript. You can find it in the rpmlint package, file InitScriptCheck.py.
LSB Compliance
The Linux Standard Base proposes the following specs:
Localisation issues
Mandriva uses gprintf, which calls gettext with the localisation domain of "initscripts" to get the localisation of the first string passed to gprintf, replacing macros (such as %s ) with the remaining arguments. To reduce the load for localisers, it is best to ensure that gprintf calls keep the first string constant, so use:
gprintf "Starting %s: " $DAEMON_NAME
instead of:
gprintf "Starting $DAEMON_NAME: "
This allows just one translation to be maintained for each similar call. White space can have an effect too ... so be careful to ensure the string is kept exactly the same.
Note that the usage of "echo" calls will be replaced with gprintf calls (by /usr/share/spec-helper/gprintify.py which is called from /usr/share/spec-helper/spec-helper which is called from /usr/lib/rpm/brp-mandrake ). Sometimes, gprintify can mess up your good use of gprintf ... in which case you should export DONT_GPRINTIFY in the %install section of your spec file to prevent gprintify.py from being called.
Initscript Header format
As of Mandriva Linux 2007, initscripts should be described using LSB headers instead of chkconfig headers. Adding LSB headers will allow having a robust services dependency check, and to provide a base for parallelized initialization, so packagers are all encouraged to add LSB headers to their packages.
Migration to LSB headers
Let's take the previous dm init script as an example.
It contained the following comments:

# chkconfig: 5 30 09
# description: This startup script launches the graphical display manager.
LSB comments
We can add LSB headers in a block delimited by the following lines:
### BEGIN INIT INFO
### END INIT INFO
Facility provides
Each initscript should provide a facility name. The services should be named preferably using this policy:
Facilities that begin with a $ sign are reserved system facilities, such as $network. A complete list is available here:
# Provides: dm
Start dependencies
The services required by an initscript should be described using the Required-Start (mandatory service) or Should-Start (optional service) tags.
# Required-Start: xfs
# Should-Start: $network harddrake
Required services
Required-Start means that the listed services must be available for this service. The initscript system will make sure that they are ( chkconfig will enforce the dependency).
Optional services
Should-Start means that the listed services should be available for this service if possible. If an optional service is enabled in this runlevel, it will be started before. If it is not enabled, its start will not be enforced by the initscript system.
Stop dependencies
If another service has to be available during stop, the mandatory dependency should be described using a Required-Stop tag:
# Required-Stop: xfs
The same is available for optional dependencies, using the Should-Stop flag.
Runlevels
You can specify which runlevels the service should be started in using the Default-Start tag.
# Default-Start: 5
Descriptions
Descriptions have to be provided using the Short-Description and Description (potentially multi-line) tags.
# Short-Description: Launches the graphical display manager
# Description: This startup script launches the graphical display manager.
Final result
The dm initscript will finally end up with the following LSB header:
### BEGIN INIT INFO
# Provides: dm
# Required-Start: xfs
# Required-Stop: xfs
# Should-Start: $network harddrake
# Default-Start: 5
# Short-Description: Launches the graphical display manager
# Description: This startup script launches the graphical display manager.
### END INIT INFO
Interactive initscripts
Some initscripts request some user input, such as harddrake when a new device is found. Since this will be quite Mandriva specific, we will use the X-Mandriva-Interactive tag (the LSB asks for an X-implementor-extension format).
# X-Mandriva-Interactive
Mandriva should probably request a vendor tag from the Linux Assigned Names And Numbers Authority, see:
References
For a more complete reference about the LSB headers, please see: | http://wiki.mandriva.com/en/Working_with_initscripts | crawl-002 | refinedweb | 1,760 | 54.12 |
Introduction.
What is Recursion?
As stated in the introduction, recursion involves a function calling itself in its own definition. A recursive function generally has two components:
- The base case which is a condition that determines when the recursive function should stop
- The call to itself
Let's take a look at a small example to demonstrate both components:
# Assume that remaining is a positive integer
def hi_recursive(remaining):
    # The base case
    if remaining == 0:
        return
    print('hi')
    # Call to function, with a reduced remaining count
    hi_recursive(remaining - 1)
The base case for us is if the remaining variable is equal to 0, i.e. how many remaining "hi" strings we must print. The function simply returns.
After the print statement, we call hi_recursive again but with a reduced remaining value. This is important! If we do not decrease the value of remaining, the function will run indefinitely. Generally, when a recursive function calls itself the parameters are changed to be closer to the base case.
Let's visualize how it works when we call hi_recursive(3):
After the function prints 'hi', it calls itself with a lower value for remaining until it reaches 0. At zero, the function returns to where it was called in hi_recursive(1), which returns to where it was called in hi_recursive(2), and that ultimately returns to where it was called in hi_recursive(3).
Why not use a Loop?
All traversal can be handled with loops. Even so, some problems are often easier solved with recursion rather than iteration. A common use case for recursion is tree traversal:
Traversing through nodes and leaves of a tree is usually easier to think about when using recursion. Even though loops and recursion both traverse the tree, they have different purposes – loops are meant to repeat a task whereas recursion is meant to break down a large task into smaller tasks.
Recursion with trees, for example, works well because we can process the entire tree by processing smaller parts of the tree individually.
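For instance, a toy tree can be summed with a few lines of recursion (the tuple-based node layout here is just for illustration):

```python
# Toy tree: each node is a (value, children) tuple.
def tree_sum(node):
    value, children = node
    # The base case falls out naturally: a leaf has an empty children
    # list, so the generator below yields nothing.
    return value + sum(tree_sum(child) for child in children)

tree = (1, [(2, []), (3, [(4, [])])])
print(tree_sum(tree))  # 10
```

Each call handles one node and delegates the subtrees to further calls – exactly the "smaller parts" idea above.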
Examples
The best way to get comfortable with recursion, or any programming concept, is to practice it. Creating recursive functions are straightforward: be sure to include your base case and call the function such that it gets closer to the base case.
Sum of a List
Python includes a sum function for lists. The default Python implementation, CPython, uses an indefinite for-loop in C to implement that function (source code here for those interested). Let's see how to do it with recursion:
def sum_recursive(nums):
    if len(nums) == 0:
        return 0
    last_num = nums.pop()
    return last_num + sum_recursive(nums)
The base case is the empty list – the best sum for that is 0. Once we handle our base case, we remove the last item of the list. We finally call the sum_recursive function with the reduced list, and we add the number we pulled out into the total.
In a Python interpreter, sum([10, 5, 2]) and sum_recursive([10, 5, 2]) should both give you 17.
Factorial Numbers
You may recall that the factorial of a positive integer is the product of that integer and all the integers preceding it. The following example makes it clearer:
5! = 5 x 4 x 3 x 2 x 1 = 120
The exclamation mark denotes a factorial, and we see that we multiply 5 by the product of all the integers from 4 till 1. What if someone enters 0? It's widely understood and proven that 0! = 1. Now let's create a function like below:
def factorial(n):
    if n == 0 or n == 1:
        return 1
    return n * factorial(n - 1)
We cater for the cases where 1 or 0 is entered, and otherwise we multiply the current number by the factorial of the number decreased by 1.
A simple verification in your Python interpreter would show that factorial(5) gives you 120.
Fibonacci Sequence
A Fibonacci sequence is one where each number is the sum of the preceding two numbers. This sequence assumes that the Fibonacci numbers for 0 and 1 are also 0 and 1. The Fibonacci equivalent for 2 would therefore be 1.
Let's see the sequence and their corresponding natural numbers:
Integers:  0, 1, 2, 3, 4, 5, 6, 7
Fibonacci: 0, 1, 1, 2, 3, 5, 8, 13
We can easily code a function in Python to determine the fibonacci equivalent for any positive integer using recursion:
def fibonacci(n):
    if n == 0:
        return 0
    if n == 1:
        return 1
    return fibonacci(n - 1) + fibonacci(n - 2)
You can verify it works as expected by checking that fibonacci(6) equals 8.
Now I'd like you to consider another implementation of this function that uses a for loop:
def fibonacci_iterative(n):
    if n <= 1:
        return n
    a = 0
    b = 1
    for i in range(n):
        temp = a
        a = b
        b = b + temp
    return a
If the integer is less than or equal to 1, then return it – that handles our base case. We then continuously add the first number to the second one, storing the first number in a temp variable before we update it.
The output is the same as the first fibonacci() function. This version is faster than the recursive one, as Python implementations are not optimized for recursion but excel at imperative programming. The solution, however, is not as easily readable as our first attempt. There lies one of recursion's greatest strengths: elegance. Some programming solutions are most naturally solved using recursion.
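A middle ground, not covered above, is to keep the elegant recursive definition but memoize it with functools.lru_cache from the standard library, so each value is computed only once:

```python
from functools import lru_cache

# Same recursive definition as before, but results are cached,
# turning the exponential call tree into a linear one.
@lru_cache(maxsize=None)
def fibonacci_cached(n):
    if n == 0:
        return 0
    if n == 1:
        return 1
    return fibonacci_cached(n - 1) + fibonacci_cached(n - 2)

print(fibonacci_cached(6))   # 8
print(fibonacci_cached(30))  # 832040
```

This keeps the readability of the recursive version while matching the iterative version's speed for practical inputs.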
Conclusion
Recursion allows us to break a large task down into smaller tasks by repeatedly calling itself. A recursive function requires a base case to stop execution, and the call to itself, which gradually leads the function to the base case. It's commonly used with trees, but other functions can be written with recursion to provide elegant solutions.
# React Code Splitting in 2019
It's 2019! Everybody thinks they know code splitting. So - let's double check!

What does code splitting stand for?
-----------------------------------
In short – code splitting is just about not loading a whole thing. When you are reading this page, you don't have to load the whole site. When you are selecting a single row from a database – you don't have to fetch them all.
Obvious? Code splitting is also quite obvious, just not about your data, but your code.
Who(What?) is making code splitting?
------------------------------------
`React.lazy`? No – it only uses it. Code splitting is done on the bundler level – webpack, parcel, or just your file system in the case of "native" esm modules. Code splitting is just files – files you can load somewhere "later". So — to the question "**What is powering code splitting?**" — the answer is — a "bundler".
Who(What) is using code splitting?
----------------------------------
`React.lazy` is using. Just using code splitting of your bundler. Just calling import when got rendered. And that's all.
What's about React-loadable?
----------------------------
`React.lazy` superseded it, and provided more features, like `Suspense` to control the loading state. So – use `React.lazy` instead.
> Yep, that's all. Thank you for reading and have a nice day.
Why article is not finished?
----------------------------
Well. There are a few grey zones about `React.lazy` and code splitting I forgot to mention.
### Grey Zone 1 – testing
It's not easy to test `React.lazy` due to its *asynchronicity* – it renders as "empty" as long as it is not loaded yet (even if it is). `Promises` and `import` return, and lazy accepts, **promises**, which always get executed in the **next tick**.
Proposed solution? You would not believe it, but the proposed solution is to use synchronous *thenables* — [See pull request](https://github.com/facebook/react/pull/14626). So — let's make our `imports` SYNCHRONOUS!!! *(to fix the lazy issue for tests, or any other server-side case)*
```
const LazyText = lazy(() => ({
  then(cb) {
    cb({default: Text});
    // this is a "sync" thenable
  },
}));

const root = ReactTestRenderer.create(
  <Suspense fallback={<Text text="Loading..." />}>
    {/* this lazy is not very lazy */}
    <LazyText text="Hi" />
  </Suspense>,
);
```
It's not hard to convert an import function into a memoized synchronous thenable.
```
const syncImport = (importFn) => {
  let preloaded = undefined;
  const promise = importFn().then(module => preloaded = module);
  // ^ "auto" import and "cache" promise
  return () => preloaded ? { then: (resolve) => resolve(preloaded) } : promise;
  // ^ return a sync thenable when possible
}

const lazyImport = isNode ? syncImport : a => a;
// ^ sync for node, async for browser

const LazyComponent = React.lazy(lazyImport(() => import('./file')));
```
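To see the timing behavior, here is a small stand-alone simulation (no React involved; the fake import just resolves a module-like object, and the names are mine):

```javascript
// Stand-alone simulation of the memoized synchronous thenable.
const syncImport = (importFn) => {
  let preloaded = undefined;
  const promise = importFn().then((module) => (preloaded = module));
  // once the promise settles, the module is cached...
  return () =>
    preloaded ? { then: (resolve) => resolve(preloaded) } : promise;
  // ...and later calls return a thenable that resolves in the SAME tick
};

// Stands in for `() => import('./file')`.
const fakeImport = () => Promise.resolve({ default: 'Text' });
const getModule = syncImport(fakeImport);

// Before the microtask queue drains we still get the original promise:
console.log(typeof getModule().then); // "function"

Promise.resolve().then(() => {
  // The cache is now warm, so .then() fires synchronously:
  let result;
  getModule().then((module) => (result = module));
  console.log(result.default); // "Text"
});
```

The second `.then()` assigns `result` before the next line runs – which is exactly what lets server-side rendering see the lazy component's content in a single pass.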
### Grey zone 2 – SSR
> If you DON'T need SSR – please continue reading the article!
`React.lazy` is SSR friendly. But it requires `Suspense` to work, and Suspense is **NOT server side friendly**.
There are 2 solutions:
* Replace Suspense with Fragment, via mocking for example. Then, use the altered version of `import` with synchronous `then` to make lazy also behave synchronously.
```
import React from 'react';
const realLazy = React.lazy;
React.lazy = importer => realLazy(syncImport(importer));
React.Suspense = React.Fragment; // :P
// ^ React SSR just got fixed :D
```
This is a good option, but it would not be quite client-side friendly. Why? Let's define the 2nd possible solution:
* Use a **specialised library** to track used scripts, chunks and styles, and load them on the client side (especially styles!) before React hydration. Otherwise – you would render empty holes instead of your code-splitted components. Yet again – you didn't load the code you just split, so you can't render anything you are going to.
### Behold code splitting libraries
* [Universal-component](https://www.npmjs.com/package/react-universal-component) – the oldest, and still maintainable library. It "invented" code splitting in terms of – taught Webpack to code split.
* [React-loadable](https://www.npmjs.com/package/react-loadable) – very popular, but an unmaintained library. Made code splitting a popular thing. Issues are closed, so there is no community around.
* [Loadable-components](https://www.npmjs.com/package/@loadable/component) – a feature complete library, it's a pleasure to use, with the most active community around.
* [Imported-component](https://www.npmjs.com/package/react-imported-component) – a single library, not bound to Webpack, i.e. capable of handling parcel or esm.
* [React-async-component](https://github.com/ctrlplusb/react-async-component) – an already dead (yet popular) library, which made a significant impact on everything around code splitting, custom React tree traversal and SSR.
* *Another library – there were many libraries, many of which did not survive Webpack evolution or React 16 – I haven't listed them here, but if you know a good candidate – just DM me.*
### Which library to pick?
It's easy – **not react-loadable** – it's heavy, unmaintained and obsolete, even if it is still mega popular. (and thank you for popularizing code splitting, yet again)
*Loadable-components* – might be a very good choice. It is very well written, actively maintained and supports everything out of the box. It supports "full dynamic imports", allowing you to import files depending on the given props – which are thus untypable. It supports Suspense, so it could replace React.lazy.
*Universal-component* – actually the "inventors" of full dynamic imports – they implemented it in Webpack. And many other things at a low level – they did them. I would say – this library is a bit hardcore, and a bit less user friendly. Loadable-components' documentation is unbeatable. Even if you don't use that library, its documentation is worth reading – there are so many details you should know…
*React-imported-component* – is a bit odd. It's bundler independent, so it would never break (there is nothing to break) – it would work with Webpack 5 and 55 – but that comes with a cost. While the previous libraries during SSR add all the used scripts to the page body, letting you load all the scripts in parallel, imported doesn't know the file names, and will call the original "imports" (that's why it is bundler independent) to load the used chunks – but it can make that call only from inside the main bundle, so all additional scripts will be loaded only after the main one is downloaded and executed. It does not support full dynamic imports, just like React.lazy, and, as a result – is typeable. It also has an absolutely different approach to CSS, and perfect stream rendering support.
There is no difference in quality or popularity between listed libraries, and we are all good friends – so pick by your heart.
Grey zone 3 – hybrid render
---------------------------
SSR is a good thing, but, you know, hard. Small projects might want to have a SSR – there are a lot of reasons to have it – but they might not want to setup and maintain it.
> SSR could be really, REALLY hard. Try razzle or go with Next.js if you want a quick win.
So the easiest my solution for SSR, especially for simple SPA would be prerendering. Like opening your SPA in a browser and hitting "Save" button. Like:
* [React-snap](https://github.com/stereobooster/react-snap) - uses [puppeteer](https://github.com/GoogleChrome/puppeteer)(aka headless Chrome) to render your page in a "browser" and saves a result as a static HTML page.
* [Rendertron](https://github.com/GoogleChrome/rendertron) - which does the same, but in a different (*cloud*) way.
Prerendering is "SSR" without "Server". It's SSR using a Client. Magic! And working out of the box… … … but not for code spitting.
So - you just rendered your page in a browser, saved HTML, and asked to load the same stuff. But Server Side Specific Code (to collect all used chunks) was not used, cos **THERE IS NO SERVER**!

In the previous part, I've pointed to libraries which are bound to webpack in terms of collecting information about used chunks - they could not handle hybrid render at all.
> Loadable-components version 2(incompatible with current version 5), was partially supported by react-snap. Support has gone.
React-imported-component could handle this case, as long as it is not bound to the bundler/side, so there is no difference for SSR or Hybrid, but only for react-snap, as long as it support "state hydration", while rendertron does not.
> This ability of react-imported-component was found during writing this article, it was not known before - see [example](https://github.com/theKashey/react-imported-component/tree/master/examples/hybrid/react-snap). It's quite easy.
And here you have to use another solution, which is just perpendicular to all other libraries.
### React-prerendered-component
This library was created for partial hydration, and could partially rehydrate your app, keeping the rest still de-hydrated. And it works for SSR and Hybrid renderers without any difference.
The idea is simple:
* during SSR - render the component ,wrapped with a
* on the client - find that div, and use innerHTML until Component is ready to replace dead HTML.
* you don't have to load, and wait for a chunk with splitted component before `hydrate` to *NOT render a white hole instead of it* - just use pre-rendered HTML, which is absolutely equal to the one *a real component* would render, and which already exists — it comes with a server(or hydrid) response.
> That's why we have to wait for all the chunks to load before hydrate - to **match** server-rendered HTML. That's why we could use pieces of server-rendered HTML until client is not ready — it is equal to the one we are only going to produce.
```
import {PrerenderedComponent} from 'react-prerendered-component';
const importer = memoizeOne(() => import('./Component'));
// ^ it's very important to keep the "one" promise
const Component = React.lazy(importer);
// or use any other library with ".prefetch" support
// all libraries has it (more or less)
const App = () => (
{/\* ^ shall return the same promise \*/ }
{/\* ^ would be rendered when a component goes "live" \*/ }
);
```
There is [another article about this technology](https://medium.com/@antonkorzunov/react-server-side-code-splitting-made-again-a61f8cbbd64b), you might read. But main here — it solves "Flash Of Unloaded Content" in another, not a common for *code splitting libraries* way. Be open for a new solutions.
### TLDR?
* don't use react-loadable, it would not add any valuable value
* React.lazy is good, but too simple, yet.
* SSR is a hard thing, and you should know it
* Hybrid puppeteer-driven rendering is a thing. Sometimes even harder thing. | https://habr.com/ru/post/444402/ | null | null | 1,722 | 58.48 |
Steve Gillmor informs us of Dave Winer's OPML namepace news:
."
This is good news. All of a sudden, the idea that OPML can become a format for attention data is a step closer to reality. I've been pushing for this so am pleased with Dave's news. (Nick Bradbury and Steve Gillmore have both picked up on my earlier call to restart the conversation around Attention.xml and OPML). One of the sticking points to making this happen was the lack of support for namespaces in OPML.
In reponse to Dave's OPML namespace update tonight, Nick Bradbury proposes we keep things simple to start off."
Nick then invites us to think: "Beyond rank, what other attention data do you think aggregators should collect? And how should they use that data to serve you better?"
Beyond 'rank'? I have some thoughts on this.
The Vote Attribute
Some have been pushing for 'clicks' as an attention data attribute. This clearly has value, but is complex for the resons I give below. There is a simpler way that I think can bring quicker results to market.
I think what could bring more immediate value to us is something along the lines of a 'vote'. It is a word Robert Scoble uses when he talks in the context of blogging. By linking to a post, he his providing a vote for a post. He's adding 'search engine juice'. This is very different to a click. The problem with using clicks as a proxy to attention is that a click doesn't necessarily mean I'm interested. In fact there are many problems with using clicks as an attention data bit (they keep cropping up in conversations I've had). Simple example: It might be that I clicked, and realised the post was not for me. Should this count? The 'solution' to this specific 'click' problem is to measure the amount of time a user spends on a page, and so on and so forth. This click route gets quickly convoluted and as you might see this is anything but a clear path to understanding whether a bit of content is actually of value to the user. And how do you quantify it...what about privacy?) It is do-able and this is what drives recommendation engines at places like Amazon, but at this stage of the Attention game we should gravitate toward the simple where at all possible. Quick wins and all that.
My view is that a simpler, neater, easier and instantly meaningful attribute to implement is a 'vote' type attribute. By vote I mean I like this item, page, post, or podcast, or whatever, so I vote for it. By making a deliberate action in casting a vote against a piece of content, I am positively registering my interest (we could have negative one too, but that's another kettle of fishy things). No fuzziness involved. A vote has explicitly more value than a click. You can think of this being similar to bookmarking (traditional or social along the lines of Del.icio.us).
How do we use this?
So how would this vote attribute be used? In the case of an RSS / feedreader, the votes can be collated, stored and then shared within my OPML attention file. This is what Nick proposing with rank of feeds. You could do this with on local desktop readers such as Nick's FeedDemon, or with webbased readers, such as Bloglines and Windows Live. Nick has more on how this would work, but the idea with rank is that it would be implicit data collected via the use of the application. And this works well at the feed level.. This vote data along with feed rank data would be probably enough data to do some very interesting things. The nice thing about all this attention.xml / attention / OPML discussion is that application developers can compete around the applications' algorithms and other data that turn this attention data into useful things - Attention engines, book and music stores could all allow the import of my attention (OPML file) or point to my OPML attention url to allow more relevant experiences.
Anyway, these are just my initial thoughts given the OPML namepaces news. We're really moving now...All very exciting! Thanks to Dave, again.
Update: Nov 18 2005:
Tags: Web 2.0, OPML, Attention attentiontrust attention.xml
Are you taking the piss alex? didn’t i correct you on gilmoor before? check yourself- two different spellings in two pars. no wonder you’re so happy with sloppy specs…
:-))
oh yeah- the voting stuff. are you aware of the lessig vote tag idea from the last election?
it set off this take on attention before i ever heard of it
please read it, if you didn’t the first time, and let me know what you think
LOL! OK, I’m leaving it there for posterity.
Will check out your post, thanks.
Does Rank suggest a historical rank, number of clicks over time, and if so aren’t you effectively voting by visiting the same resource again and again?
Isn’t including the content (RSS feed, whatever) in the outline element a "vote" for it in the first place? You’re effectively voting for that content, and not voting for content that you don’t put in. What meaning does it have to not vote for something you include? Then you have three states: not included, included and not voted for, and included and voted for. All you’re doing is increasing the number of attention states from two to three. If that’s what you want, then fine. I could see people voting for everything though, so you’re left with effectively two states again and it’s pointless.
I’ll echo Paul’s comment by saying that subscribing to a feed is automatically a vote for it. It might be worthwhile for aggregators to also keep track of feeds you’ve unsubscribed from and flag them with "vote-against" – this way the importing aggregator would know not to recommend feeds you’ve already seen and removed from another aggregator.
BTW, Attention.xml includes a rev/votelink attribute:
Is this more or less what you’re proposing?
manual trackback:
Paul, Nick. You talking at the feed level. As I tried to stress in my post, I’m talking at the item level.
Voting at the item level seems useful, but given how many items we read each day, I’m not convinced enough people would take the time to manually vote for individual items. However, if the action wasn’t an explicit "vote" it might work – for example, if you flag an item in FeedDemon, it could be considered a vote for that item.
Yes, Nick, that would do the trick!!
Glad you agree 🙂 I think in general, most people don’t want to take the time to rank items, so tools such as FeedDemon need to infer ranking (or voting) based on the user’s actions.
That said, explicit voting at the feed level might be practical, since people may be more inclined to say "I like this feed" than they would "I like this post."
PingBack from | https://blogs.msdn.microsoft.com/alexbarn/2005/11/18/opml-and-the-road-to-attention-data-progress/ | CC-MAIN-2019-18 | refinedweb | 1,206 | 73.27 |
We love Google Analytics and want to contribute to its community with this PHP client implementation. It is intended to be used stand-alone or in addition to an existing Javascript library implementation.
It's PHP, but porting it to e.g. Ruby or Python should be easy. Building this library involved weeks of documentation reading, googling and testing - therefore its source code is thorougly well-documented.
The PHP client has nothing todo with the Data Export or Management APIs, although you can of course use them in combination.
Requires PHP 5.3 as namespaces and closures are used. Has no other dependencies and can be used independantly from any framework or whatsoever environment.
The current release is based on version 5.2.5 of the official Javascript client library, see Changelog for details.
A very basic page view tracking example:);
Thanks to Matt Clarke for two great articles:
Google Analytics is a registered trademark of Google Inc.
Made with love by United Prototype in Cologne, Germany. | http://code.google.com/p/php-ga/ | crawl-003 | refinedweb | 167 | 58.99 |
VVV08
From Wiki for iCub and Friends
Welcome to the RobotCub summerschool, July 2008.
for the 2009 school, see this page: VVV09
- Edit these wiki pages with username "vvv08", password on the board.
- Who's Who / Pictures' Page / VVV08 Demos (add photos!)
What about adding some more contact information in Who's Who (e.g. your skype id) so that we can keep in touch once the summer school is over?
- You can reserve a slot of time for using the robot (scheduling).
- Practical information: laundry service 8Kg of clothes to wash and dry in 1 hour for 7 euro (soap is free), click here for directions from hotel mira
- Free time information: A TV series hint: If you want to know what geeks live is like in an IT department: The IT CROWD
- Hints on how to resolve ip address conflict Conflicts.
Contents
- 1 How did you like the school
- 2 Wednesday, Day -1, Last Day, L'ultimo Giorno (day 10 of 10)
- 3 Tuesday Day -2 (day 9 of 10)
- 4 Monday Day -3 (day 8 of 10)
- 5 Sunday Day 7 (free)
- 6 Saturday Day 6
- 7 Friday Day 5 (Other free+open robotics projects)
- 8 Thursday Day 4 (Group work)
- 9 Wednesday Day 3 (forming groups)
- 10 Tuesday Day 2 (getting introduced)
- 11 Monday Day 1 (getting installed)
- 12 Links
How did you like the school
Wednesday, Day -1, Last Day, L'ultimo Giorno (day 10 of 10)
- Thanks guys!
- Dinner at the vis-a-vis. List of people registered for dinner. Time: 8.30 pm.
- Today we'll be finishing a little earlier, at 6pm.
- VVV08 Demos: add photos/links/screen-captures...
- VVV08 Wish List
- Yarp device tutorial coming, ready or not. Scheduled for after coffee.
- Text of tutorial: Making a new device in YARP
- Breakout meeting to plan some details of "icub-0.2", the second official release of icub software (the first official release hasn't happened yet, but we won't let that stop us. Scheduled for after lunch.
- After the school:
- We invite anyone who hasn't already done so to get on the Robotcub-hackers mailing list if you are interested in following (or contributing to) progress in the iCub software and YARP. People at the school have also mentioned Orocos, KDL, and Player, which have their own mailing lists.
- Think about the role that free software has played at the school and your own research.
- Think about the places during the school or in your research where you've had to use "tricks" or simplifications because there isn't time to implement something.
- Think about what parts of your own software you could tidy up and release under a free license so that people won't need to do tricks for that part. There's usually no explicit reward for doing this in academia, but you'll feel good, there is a good chance your papers will get cited more, and people you never met will do more interesting research because of you.
Tuesday Day -2 (day 9 of 10)
- Festival in Windows i.e. text-to-speech for the iCub.
- Sky-watching is on again tonight; there may be a clearer sky.
- No more verbal group updates; instead we are encouraging casual demos. Got something cool working on the robot? Call people over to see. Done something neat on the simulator? Hook it up to the projector and give us a look.
- Please post photos of demos (or screencaps of simulator) on VVV08 Demos.
Monday Day -3 (day 8 of 10)
- Welcome back to the school after a relaxing week end. We hope you had fun!
- The sky-watching night is confirmed for tonight (there is also the possibility to repeat it tomorrow). More details later in the day. We might get some wine and dessert. Time: 10pm. Please sign up here as soon as possible. Dinner as been moved to 8pm. The telescope will be ready 9.30 on the terrace just outside the summer school main room (same floor).
- This week: anyone who hasn't used the real robot is strongly encouraged to - sign up on the RobotScheduling page
- This week: try to create integrated robot behaviors, for examples see Experimental Investigation 1
- We'll check in with the groups again:
- Meeting: simulator v. reality
- Joint work between Visual attention and Grasping groups
- Some updates to VVV08 Wish List
Sunday Day 7 (free)
- The free day. But what to do? Here are possible trip targets as well as a trip idea where you can sign in.
- Did you enjoy it?
Saturday Day 6
- Luis is now Jonas. Welcome Jonas!
- Today we plan to finish earlier, by 4pm
- How about a friendly game of *StarCraft* today before dinner? Sign up here:StarCraft
- Federico will do a Orocos-KDL beginners tutorial at 2:00pm (from a beginner point of view) KDL-simple
Friday Day 5 (Other free+open robotics projects)
- Today and tomorrow we have people from Mascha Film Munich filming a documentary about robotics for international television and german cinema (the film will be released in 2009).
- Important information for Sat and Sun dinners. Please read and sign yourself up here Dinner information!
- VVV08 Wish List started, add YARP/iCub/simulator/... features you'd like.
- Talks today:
- Alexis told us about Player. Here are the slides of the very chaotic talk Media:Psg_talk.zip ;).
- Ruben told us about Orocos, link to the presentation, link to the challenging two robot application,
- We'll be looking for another group update today, sometime after lunch. Pick someone to talk. Should not be the same person as yesterday (unless you are a group of one).
- Future talks:
- Federico will talk sometime in the coming days about KDL from a new user's perspective.
- Is there a talk you'd like to give or hear? Tell Paul/Lorenzo/...
- Group updates:
Thursday Day 4 (Group work)
- Coming up sport events. This evening: volleyball.
- Each group should maintain a page saying briefly what they are doing and giving links etc.
- VVV08 Grasping Group
- VVV08 Visual Attentionators
- VVV08 Pointing People
- VVV08 Yarpers (a virtual group, so that learning about YARP doesn't stop people being in other groups)
- Serge - Multiple iCub person :-)
- VVV08 Interaction Histories
- VVV08 Drumming
- We'll be asking for a quick update from each group during the day.
Wednesday Day 3 (forming groups)
- Access to the robots
- robotcub.org will be down for some time today.
- Vadim has updated the simulator, with fixes for getting images from the virtual cameras in the robot's eyes.
- Ideas for groups? You can organize yourself with others, or if you don't have a specific idea you can follow the following task list:
- Task A: The Kibitzer
- Task B: The Controller
- Task C: The Coordinator
Enigmatic message preserved for historical analysis by posterity:
Last news! :D Hi to every guy! Would you like to play football (friendly) at the beach tis evening after dinner? Let us know! :P Oh! Rectification! Some guys talks about before dinner... Mmm.... I feel confused... Gazed and confused... :| -- (4.58 PM) A piece of new just arrived: 7.00 in front of the "Mira" hotel! We hope to see you in a huge number! :D
Tuesday Day 2 (getting introduced)
- Controlling the simulator2 - another way to control the simulator.
- People introduced themselves, presentations here: VVV08 Participants (wiki code examples from previous years: previous years)
- Some YARP Tutorials
- An example on how to receive/send commands from/to the robot or simulator (zip archive) - this is the code Lorenzo showed yesterday (small fix from Katrin)
- ICubForwardKinematics - the details of the robot joints and measurements.
- ICub joints - names of ports, and indices for motors for the robot (the simulator is the same, with "/icub" replaced by "/icubSim").
- Using a shared simulator
Monday Day 1 (getting installed)
Getting started. As in the good tradition of the VVV summer schools, we got our daily power shortage. We will spend the first day getting yarped!
- How to get Yarp: see here
- How to get the iCub software: click here
Introduction: Giorgio's presentation on the iCub.
Namespaces
- YARP Namespaces are a way of running multiple non-conflicting name servers on a network
- Use Namespaces to run your own name server avoiding any conflicts with other participants. e.g.
yarp namespace /yourname yarp server yarp check
The Simulator
- Simulator README, for compiling and using the simulator | http://wiki.icub.org/index.php?title=VVV08&oldid=4842 | CC-MAIN-2019-18 | refinedweb | 1,400 | 72.76 |
From Item 42 you know that modules are packages that follow certain conventions. There is enough nitpicky detail in these conventions to make it difficult to write a module on your own without some help. This is particularly true if you are planning to package and release a module for public use. Fortunately, help is available in the form of a utility called h2xs .
The h2xs program was originally designed to simplify the process of writing an XS module (see Item 47), but has long since been used for the broader purpose of providing a starting point for writing all Perl modules, XS or not. You should always use h2xs to create the boilerplate for a new Perl module. This will save you and your module's potential users many hours of grief , confusion, and frustration. Trust me on this one.
The easiest way to get started with h2xs is to try it for yourself. Let's look at an example.
Let's use h2xs to create a skeleton for a new module called File::Cmp , then flesh it out so that it works. File::Cmp will contain a function that will compare the contents of two files and return a value indicating whether they are identical. (There already is a File::Compare module that does the same thing, but, remember, this is just an example.)
The first step is to run h2xs . This will create some directories and files, so be sure to execute the command in a "scratch" directory in which it's okay for you to work. We will use the -A , -X , and -n options. The -A option tells h2xs not to generate any code for function autoloading. The -X option tells h2xs that this is an ordinary module and that no XS skeleton will be needed (see Item 47). The -n option supplies the module name . This is the usual combination of options for starting work on a "plain old module":
Begin work on a module by running h2xs .
% h2xs -A -X -n File::Cmp Writing File/Cmp/Cmp.pm Writing File/Cmp/Makefile.PL Writing File/Cmp/test.pl Writing File/Cmp/Changes Writing File/Cmp/MANIFEST
No XS or autoloading.
h2xs creates some files.
At this point you already have a "working" module that does nothing. You could build, test, and install it just as if you had downloaded it from the CPAN (see Item 41). Before we do that, however, let's add some code so that it actually does something useful. Let's start with the file Cmp.pm , which contains the new module's Perl source code. It should look something like the followingminus the italicized annotations, of course:
The file Cmp.pm
Begin by setting the default package to File::Cmp , turning on strict , and declaring a few package variables .
package File::Cmp; use strict; use vars qw($VERSION @ISA @EXPORT @EXPORT_OK);
Use the Perl library's Exporter module. We can require rather than use it because this module will be use -d.
require Exporter;
Subclass Exporter and AutoLoader . We're not actually using AutoLoader in this example, so you can delete any references to it if you like. For more about subclassing and inheritance in Perl, see Item 50.
@ISA = qw(Exporter AutoLoader);
Here's a place to add the names of functions and other package variables that we want to export by default. When we use this module, Exporter will import these names into the calling package. You also can add names to an array called @EXPORT_OK . Exporter will allow those names to be exported on request.
# Items to export into callers namespace by default. Do not export # names by default without a very good reason. Use EXPORT_OK instead. # Do not simply export all your public functions/methods/constants. @EXPORT = qw( );
A version number that you should increment every time you generate a new release of the module.
$VERSION = '0.01';
Insert function(s) after the next line.
# Preloaded methods go here.
We're not autoloading anything, so ignore this.
# Autoload methods go after =cut, and are processed by the autosplit program.
Modules must return a true value to load properly (see Item 54 ). Make sure you don't get rid of this all-important 1 .
1; __END__
The stub for the built-in POD documentation (see Item 46 ) follows the __END__ of the source code.
# Below is the stub of documentation for your module. You better edit it! =head1 NAME File::Cmp - Perl extension for blah blah blah =head1 SYNOPSIS use File::Cmp; blah blah blah =head1 DESCRIPTION Stub documentation for File::Cmp was created by h2xs. It looks like the author of the extension was negligent enough to leave the stub unedited. Blah blah blah. =head1 AUTHOR A. U. Thor, a.u.thor@a.galaxy.far.far.away =head1 SEE ALSO perl(1). =cut
Let's add a function called cmp_file . It will compare the contents of two files, then return if they are identical, a positive number if they are different, and -1 if some sort of error has occurred. Insert the following code after the line that says Preloaded methods go here :
cmp_file : Compare contents of two files
sub cmp_file { my ($file1, $file2) = @_; local(*FH1, *FH2);
This subroutine takes two filenames as arguments.
return -1 if !-e $file1 or !-e $file2; return 0 if $file1 eq $file2;
See if files exist.
Same filenames = same contents.
open FH1, $file1 or return -1; open FH2, $file2 or close(FH1), return -1; return 1 if -s FH1 != -s FH2;
Open files.
Different sizes = different contents.
my $chunk = 4096; my ($bytes, $buf1, $buf2, $diff); while ($bytes = sysread FH1, $buf1, $chunk) { sysread FH2, $buf2, $chunk; $diff++, last if $buf1 ne $buf2; }
We will read one "chunk" at a time.
Read a chunk from each file and compare as strings.
close FH1; close FH2; $diff; }
close files, return status
We will want to export cmp_file from this module automatically (see Item 42). Just add it to the @EXPORT list:
@EXPORT = qw( cmp_file );
Now we need a test script. Open the file test.pl and add the following at the end:
Test script for File::Cmp
This script creates three files containing some random data. Two files are identical and the third is different. We start by creating the data:
srand(); for ($i = 0; $i < 10000; $i++) { $test_blob .= pack 'S', rand 0xffff; } $test_num = 2;
Do the testing inside an eval block (see Item 54 ) to make error handling easier.
eval { open F, '>xx' or die "couldn't create: $!"; print F $test_blob; open F, '>xxcopy' or die "couldn't create: $!"; print F $test_blob; open F, '>xxshort' or die "couldn't create: $!"; print F substr $test_blob, 0, 19999;
The test files have been created. Now, use cmp_file to compare them.
if (cmp_file('xx', 'xxcopy') == 0) { print "ok ", $test_num++, "\n"; } else { print "NOT ok ", $test_num++, "\n"; } if (cmp_file('xx', 'xxshort') > 0) { print "ok ", $test_num++, "\n"; } else { print "NOT ok ", $test_num++, "\n"; } };
Report any exceptions from the eval block, tidy up, and we're done.
if ($@) { print "... error: $@\n"; } unlink glob 'xx*';
You also should change the line near the top of test.pl so that the count of tests reads correctly:
BEGIN { $ = 1; print "1..3\n"; }
At this point, we can build and test the module (see Item 41):
% perl Makefile.PL Checking if your kit is complete... Looks good Writing Makefile for File::Cmp % make test cp Cmp.pm ./blib/lib/File/Cmp.pm AutoSplitting File::Cmp (./blib/lib/auto/File/Cmp) PERL_DL_NONLAZY=1 /usr/local/bin/perl -I./blib/arch -I./blib/lib -I/usr/local/lib/perl5/sun4-solaris/5.003 -I/usr/local/lib/perl5 test.pl 1..3 ok 1 ok 2 ok 3
Cool.
There are still some things to be done. You will need to replace the documentation stub in Cmp.pm with something more informative (see Item 46). You also should add a description of the work you did to the log in Changes . You could add some more thorough tests to test.pl .
Once you have whipped the module into shape, you can prepare a distribution. Just make the tardist target:
% make tardist rm -rf File-Cmp-0.01 /usr/local/bin/perl -I/usr/local/lib/perl5/sun4-solaris/5.003 -I/usr/local/lib/perl5 -MExtUtils::Manifest=manicopy,maniread \ -e 'manicopy(maniread(),"File-Cmp-0.01", "best");' mkdir File-Cmp-0.01 tar cvf File-Cmp-0.01.tar File-Cmp-0.01 File-Cmp-0.01/ File-Cmp-0.01/Makefile.PL File-Cmp-0.01/Changes File-Cmp-0.01/test.pl File-Cmp-0.01/Cmp.pm File-Cmp-0.01/MANIFEST rm -rf File-Cmp-0.01 compress File-Cmp-0.01.tar
Voila! You now have a file called File-Cmp-0.01.tar.Z , which contains the source to your module. This file follows the conventions of the CPAN and is ready for distribution to the world.
If you have written a useful module, consider sharing it with the rest of the worldsee Item 48. | https://flylib.com/books/en/1.93.1.69/1/ | CC-MAIN-2018-39 | refinedweb | 1,511 | 76.22 |
Some projects require authentication features that involve some quite intricate steps. But fret not, in JMeter we can use Groovy to do the heavy lifting. Below is a very simple example of how you can do a HMAC encryption. It also includes the SHA256 hashing and base64 encoding. The only thing missing are the functions to read the variables from JMeter and publish the hash to JMeter but that is trivial and you can fit it into whatever you already have scripted.
import javax.crypto.Mac; import javax.crypto.spec.SecretKeySpec; import java.security.InvalidKeyException; String secretKey = "secret"; String data = "Message"; Mac mac = Mac.getInstance("HmacSHA256"); SecretKeySpec secretKeySpec = new SecretKeySpec(secretKey.getBytes(), "HmacSHA256"); mac.init(secretKeySpec); byte[] digest = mac.doFinal(data.getBytes()); encodedData = digest.encodeBase64().toString(); log.info("HMAC SHA256 base64: " + encodedData);
by Oliver Erlewein
Thanks for the cool solution. I would recommend to remove mentioning of Beanshell as
1. It doesn’t provide possibility to directly perform Base64 encoding on the byte array
2. It has some performance problems
Also JMeter comes with Apache’s commons-codec library so HmacUtils class could be used.
Hi,
Yes I din’t test on BeanShell. Thanks for pointing that out. Is now removed.
As for performance…yeah…just better to use Groovy.
Cheers | https://hellotestworld.com/2015/12/01/hmac-and-sha256-in-jmeter/ | CC-MAIN-2022-05 | refinedweb | 211 | 53.58 |
this exercise was from the book "Java how to program 9th edition".
you ask the user to input 10 numbers and your program should find the largest number and the second largest number out of the 10. the logic in my code makes sense to me but it does not come up with the two largest numbers. :mad:
Code Java:
import java.util.Scanner; public class Largest { public static void main(String [] args){ Scanner input = new Scanner(System.in); int counter = 0; int currentNum; int largestNum = 0; int secLargestNum = 0; while (counter < 10){ currentNum = input.nextInt(); if (currentNum > secLargestNum && currentNum > largestNum){ largestNum = currentNum; if (currentNum > secLargestNum && currentNum < largestNum){ secLargestNum = currentNum; } } counter++; System.out.printf("%d\n", largestNum); System.out.printf("%d\n", secLargestNum); } } } | http://www.javaprogrammingforums.com/%20whats-wrong-my-code/13133-find-two-largest-number-printingthethread.html | CC-MAIN-2013-48 | refinedweb | 123 | 57.67 |
Why does Java have 2 ways to create a thread?
Prabhat Ranjan
Ranch Hand
Posts: 397
Why does Java have 2 ways to create threads?
1) Thread class
2) Runnable interface
i) Extending the Thread class prevents you from extending any other class, while implementing the Runnable interface leaves you free to extend one
ii) Runnable has only one method, run(), which you must implement, while Thread has other methods besides run()
What other benefits do we have?
Regards,
Prabhat
Java has only 1 way to create a thread: with the Thread class (or a child of it). However, it can be used in combination with the Runnable interface.
The Runnable is just an abstraction of the code to be executed. It's generally discouraged to extend the Thread class: the Thread class has a lot of overhead,
and the Runnable interface doesn't.
"Any fool can write code that a computer can understand. Good programmers write code that humans can understand." --- Martin Fowler
Please correct my English.
Prabhat Ranjan
I have a few comments on this point:
1) The Runnable interface has only one method, which you must implement.
2) If you need other methods like suspend(), resume(), sleep(), join(), yield(), and stop(), then
go for extending the Thread class.
3) Extending the Thread class will make your class unable to extend other classes, because of the single inheritance feature in Java.
4) If you want to execute the run() method multiple times, then it's better to use Runnable.
public class testing implements Runnable {
public void run() {
System.out.println("Hello Run --->");
}
public static void main(String args[]){
testing testing = new testing();
Thread thd = new Thread(testing);
thd.run();
thd.run();
}
}
The Thread class, on the other hand, doesn't allow you to call the start() method more than once;
a second call will throw IllegalThreadStateException.
5) Thread Class actually implements Runnable interface internally.
Prabhat Ranjan wrote:2) If you need other methods like suspend() resume() sleep() join() yield() and stop() then go for extending class Thread
Please don't. suspend(), resume() and stop() are deprecated and should not be used. sleep() and yield() are static and cannot be overridden. join() is final and cannot be overridden.
By just instantiating a new Thread object you can make use of join() without ever needing to extend it. You don't need to extend a class to be able to call its public methods.
5) Thread Class actually implements Runnable interface internally.
Not just internally, also externally
The implements clause is not an implementation detail but part of its API. But yes, it does implement Runnable.
Prabhat Ranjan
Prabhat Ranjan wrote: About point 2, I would like clarification — if methods are deprecated, does that mean we can't use them in real-world programming?
Well, you can. You just shouldn't. They are deprecated for a reason; either they can cause a lot of problems (like Thread.resume(), Thread.suspend(), Thread.stop()) or there are better alternatives.
What other benefits are there to using Runnable over Thread?
Aren't points 3 and 4 enough? No? Then how about sharing one Runnable instance among multiple Threads at the same time?
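As a minimal sketch of that last point (class and thread names here are illustrative), one Runnable instance can drive several Thread objects at once:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class SharedRunnableDemo {
    static final AtomicInteger runs = new AtomicInteger();

    // One Runnable instance shared by two Thread objects: the task
    // (what to run) is decoupled from the thread (how to run it).
    static void runSharedTask() {
        Runnable task = new Runnable() {
            public void run() {
                runs.incrementAndGet();
                System.out.println(Thread.currentThread().getName()
                        + " ran the shared task");
            }
        };
        Thread t1 = new Thread(task, "worker-1");
        Thread t2 = new Thread(task, "worker-2");
        t1.start();
        t2.start();
        try {
            t1.join();
            t2.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    public static void main(String[] args) {
        runSharedTask();
        System.out.println("task ran " + runs.get() + " times");
    }
}
```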
I think this point has already been made, but to reiterate: According to Cay S. Horstmann and Gary Cornell in Core Java 2: Volume II - Advanced Features...
...forming a subclass of the Thread class ... is no longer recommended. You should decouple the task that is to be run in parallel from the mechanism of running it.
Matthew Brown
Bartender
Posts: 4568
I'd look at it this way. What is a Thread? It represents a process running on the machine, that can execute some actions. What is a Runnable? It represents an action to execute. These are really two completely different concepts - so general object-oriented principles suggest that using two different objects is likely to be preferable.
Definition of Spring Boot Tomcat
Tomcat is the most popular servlet container for deploying Java applications; by default, Spring Boot builds a standalone application that runs with an embedded server. Installing Tomcat as a service lets it manage multiple applications within a single server instance, avoiding the need to set up a separate server for each application. Spring Boot uses a main method to launch the embedded server's endpoint, and a Maven build creates a jar file containing all of the project's dependencies. Alternatively, we can deploy our Spring Boot application on an external Tomcat server.
What is Spring Boot Tomcat?
- After packaging the Spring Boot application, we can deploy the resulting artifact on an Apache Tomcat server.
- Spring Boot builds its applications on top of Spring, so they can be served wherever Spring can be served.
- Deploying a Spring application on an Apache Tomcat server involves several configuration steps.
- To deploy the Spring Boot application on Apache Tomcat, we first need to install the Apache Tomcat server.
- When running the Spring Boot application, Spring Boot detects that we have a Spring MVC controller and starts an embedded Apache Tomcat instance by default.
- When using the Tomcat server with our Spring Boot application, we may need to change some configuration settings; for example, we first need to enable HTTPS for the web services our application exposes.
- A Spring Boot web application running on HTTPS requires an SSL/TLS certificate and, in a traditional deployment, its own web server.
- The embedded servlet container customizer exposes most of the power of the explicit XML configuration used for a standalone Apache Tomcat instance.
- We need to define which port the server listens on, and we can configure this property through the command line or a loaded property file.
- To change the port the Tomcat server listens on, we specify the server port property; if we set the port to zero, Spring Boot automatically finds an unused port and assigns it to the server.
- By default, the Spring Boot application uses Tomcat version 7. If we need to use a higher version of Tomcat, we can override the Maven build property, which triggers the build resolution for the newer version.
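For example, assuming the standard application.properties mechanism, the port can be pinned to a specific value, or set to zero to request any unused port:

```properties
# src/main/resources/application.properties

# Listen on a fixed port:
server.port=8443

# Or uncomment to let Spring Boot pick any unused port:
# server.port=0
```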
Spring Boot Application into Tomcat
- Deploying a Spring Boot application on the Apache Tomcat server involves the three steps below.
1) The first step is to set up a Spring Boot application.
2) The second step is to create a war file of the Spring Boot application.
3) The third step is to deploy the war file on the Tomcat server.
- The first step of Spring Boot Apache Tomcat deployment is to create a Spring Boot application. In this step, we create a new Spring Boot project.
- The second step is to create a war file of the application we have developed; we create the war file using a Maven build.
- The third step is to deploy the application war file on the Tomcat server by copying it into the webapps directory.
Spring Boot Project in Tomcat
The example below shows how to create an application:
1) Create project template using spring initializer –
Group – com.example
Artifact name – spring-boot-tomcat
Name – spring-boot- tomcat
Description - Project of spring-boot- tomcat
Package name - com.example.spring-boot- tomcat
Packaging – Jar
Java – 11
Dependencies – spring web.
2) After generating the project, extract the files and open the project using Spring Tool Suite –
3) After opening the project in Spring Tool Suite, check the project and its files –
4) Add the Spring web dependency –
Code:
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
</dependency>
5) Create controller class for application –
Code:
@RestController
public class TomcatController {

    @GetMapping("/apache")
    public String apache() {
        return "Apache Tomcat.";
    }
}
6) Run spring boot application –
7) Check the output of the application on the browser –
8) Create a spring boot war file for our project.
9) Add tomcat server dependency –
Code:
<packaging>war</packaging>

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-tomcat</artifactId>
    <scope>provided</scope>
</dependency>

<finalName>web-services</finalName>
10) Create WAR file of the project –
- First, build the project with a Maven build: enter the goals as clean install, then click Apply and Run.
11) Check the build status of the project –
- After running the Maven build, check the build status of the application.
12) Check the war file is generated in a specified location –
13) Install tomcat server –
14) Copy the war file and paste it into the webapps folder of Apache Tomcat –
15) Open a command prompt and run the startup command –
startup
16) Check whether the war file is deployed successfully –
17) After the war file is deployed successfully, check the application by opening the URL –
Code for Spring Boot Application class
The code below shows the Spring Boot application class that we implemented for deploying the application on the Apache Tomcat server.
Code:
@SpringBootApplication
public class SpringBootApacheApplication extends SpringBootServletInitializer {

    @Override
    protected SpringApplicationBuilder configure(SpringApplicationBuilder application) {
        return application.sources(SpringBootApacheApplication.class);
    }

    public static void main(String[] args) {
        SpringApplication.run(SpringBootApacheApplication.class, args);
    }
}
Conclusion
Tomcat is the most popular servlet container for deploying Java applications. To deploy a Spring Boot application on an Apache Tomcat server, we first need to install the Tomcat server; deploying a Spring application on the Tomcat server then involves the steps described above.
In my last column, I demonstrated how to build a container using the container classes available in the standard Java distribution. That column ended with the building of a container, which was tested with a simple test application. What was left unspoken in the last column was a design decision to make the indexing key of our container a String object.
A String object was used because it has the property of being easily sorted. No other standard object in the Java standard libraries is both as flexible and as sortable as a string. We can't go ahead and make all keys in our container a subclass of String because String is final -- it can't be subclassed. Further, some objects work well as their own key, which can save space on the heap and reduce garbage generation, both of which are Good Things. So what do we do? Well, let's see...
Solving the key dilemma
There are three approaches to solving the key dilemma. Each has its drawbacks, so one might conclude that there is no good solution. This is not true. A good universal solution may not exist, but there are widely applicable solutions.
The first approach: To everything Stringness
The first approach takes advantage of the fact that every object defines a toString method. This method is initially defined in Object but can be overridden by subclasses.

The toString method was designed so that all objects might have some representation that could be displayed on a character device such as a terminal or window. The number classes provide overridden versions that convert a subclass of Number to a printable representation. The default implementation in Object gives you the name of the object's class and its reference number (actually its address in memory) as a string. The format of the string is classname@hex-address.

With this knowledge, we can write a helper method -- stringForKey -- and then rewrite get, put, and remove to use this method on the key objects that are passed to them. The next bit of code shows both the helper method and the rewritten get that implements this solution:
private String stringForKey(Object o) {
    if (o instanceof String)
        return (String) o;
    return o.toString();
}

public Object get(Object skey) {
    String s = stringForKey(skey);
    return search(rootNode, s).payload;
}
In the code above, this version of the method uses the instanceof operator in Java to test if the key passed is already a string; if it isn't, it uses the toString method in Object to create a string for the tree. I have included a link to the source code of BinarySearchTree written in this way.

The advantage here is that, by default, all objects have an implementation of toString, and if this method is implemented in such a way as to uniquely define objects, we could certainly use it. Unfortunately, this solution depends on Java programmers knowing that this function may be used for this purpose. If a naive programmer were to design a class with an override of toString like that shown below, our search tree would be in big trouble.

public String toString() {
    return "A Foo Object.";
}

The method shown above is a perfectly legitimate implementation of toString according to the Java language specification, and yet it has the annoying side effect of causing objects of this type to be non-sortable. Every instance of this object type will return the same string representation and thus will not be distinguishable in the BinarySearchTree code.
A second approach: Interfaces
The issues involved in using toString can be evaluated in The Java Language Specification by James Gosling, Bill Joy, and Guy Steele. Taken from the specification, the contract for toString is as follows:

"The general contract of toString is that it returns a string that 'textually represents' this object. The idea is to provide a concise but informative representation that will be useful to the person reading it."

As you can see, the specification doesn't say anything about being unique across object instances.

To get a guarantee that the programmer knows what we want from our keys, we can define an interface that provides what Object's contract never promised: that its methods can be used for unique keying. This way, we can make the contract for that interface clear to its implementers. The canonical example is to define an interface, ours is named SearchKey, that defines the contract for a key used for searching. The code for SearchKey is shown below.
public interface SearchKey {
    /**
     * Returns < 0 if search key o is less than this object,
     *         0 if search key o is equal to this object,
     *         > 0 if search key o is greater than this object.
     */
    abstract int compareKey(SearchKey o);

    /** Returns true if the passed search key matches this one. */
    abstract boolean equalKey(SearchKey o);
}
This interface defines the method compareKey to have semantics that are equivalent to the compareTo method in class String. I avoid reusing the same, or common, method names in the interface definition as Java is unable to distinguish between two methods with the same signature in two separate interfaces. It is poor form to choose method names that might easily collide with other interface definitions. The second method returns a boolean value indicating that two keys are equivalent.

The contract, then, is that any object implementing this interface can be used as an index key to our container. The most common implementation of equalKey might be similar to the code shown below.
public boolean equalKey(SearchKey o) {
    return (compareKey(o) == 0);
}
The interface explicitly calls out the requirements of its implementation. Therefore, programmers implementing this interface in their objects will understand the underlying contract between the container and its keys. By combining the two methods equalKey and compareKey, the invariants of the binary tree search algorithms can be maintained.

Using this interface in our container is a bit more difficult. We cannot change the method signatures of get, put, and remove, as they are constrained by the Dictionary superclass. However, we can force an error if the client fails to abide by our rules. As in the first approach, we define a private helper method to convert from the Object class reference to a SearchKey reference. This method is called keyForObject and it is shown in the code below.
private SearchKey keyForObject(Object o) {
    if (o instanceof SearchKey) {
        return (SearchKey) o;
    }
    if (o instanceof String) {
        return new StringSearchKey((String) o);
    }
    throw new RuntimeException("Dictionary key must implement SearchKey");
}
Again, as in the first approach described in this column, if our object type is not the type we require, we throw an exception; however, we do allow for one luxury, which is to accept objects of class String directly. Being able to directly accept a string is useful because strings are so often used as index objects in dictionaries.

Using this approach, we can rewrite BinarySearchTree to check that the objects are either strings or search keys. The code below shows how the put method in BinarySearchTree is rewritten to use SearchKey keys rather than string objects.
public Object put(Object keyObj, Object value) {
    BSTNode n;
    Object r = null;
    SearchKey k = keyForObject(keyObj);

    n = search(k);
    if (n != null) {
        r = n.payload;
        remove(n);
    }
    n = new BSTNode(k, value);
    insert(n);
    return r;
}
Note that while the interface to put states that it will accept any object, in fact it will only accept objects that implement SearchKey. Any object that is passed as a key object to the put method that is neither a string nor implements the SearchKey interface will be rejected. This rejection comes in the form of a run-time exception that is thrown by the keyForObject method when the conversion from a generic object into a SearchKey type object is attempted. This type of exception should be used only in truly exceptional situations where you will need to abort the Java thread if the exception condition exists.

The rewritten BinarySearchTree class is here, and this impacts the BSTNode class as well in that the key type is changed from String to SearchKey. The modified code for BSTNode is here. Now this looks OK, but in fact it opens up a few new challenges. Before jumping on to the challenges, let's look at the advantages of this technique.

The primary advantage of this technique is that the key function of the dictionary has been generalized into the SearchKey behavior. Thus, to use any object as a key in our binary search tree, we need only insure that the object implement this interface. This allows us to design our payload objects so that they can be both the key and the value. Consider the following redesign of DictionaryTest; we start by designing a class to store in our dictionary that can be both the key and the value. This is called ColorKey and is shown below. In a real application, the value would probably be a java.awt.Color object, but for our example we'll leave it as a string.
class ColorKey implements SearchKey {
    String key;
    String colorValue;

    public ColorKey(String k, String v) {
        key = k;
        colorValue = v;
    }

    private void verifyType(SearchKey k) {
        if (! (k instanceof ColorKey))
            throw new RuntimeException("You cannot mix keys here.");
    }

    public int compareKey(SearchKey b) {
        verifyType(b);
        return key.compareTo(((ColorKey) b).key);
    }

    public boolean equalKey(SearchKey o) {
        return (compareKey(o) == 0);
    }

    public String toString() {
        return "ColorKey:: '"+key+"' ==> '"+colorValue+"'";
    }
}
You will notice that the compareKey method is where all the work is done. This method insures that the object being passed to this method for comparison is in fact a ColorKey object (it could also be a subclass of ColorKey). If the object has the correct type, it is cast to a ColorKey, and the compareTo method of the instance variable key is used to compare this ColorKey object with the ColorKey object passed into this method.

In the DictionaryTest class, the initial array of strings was modified to initialize an array of ColorKeys as shown in the next section of code:
From DictionaryTest.java:

ColorKey keys[] = new ColorKey[colors.length];
for (int i = 0; i < colors.length; i++) {
    keys[i] = new ColorKey(colors[i], (i+1)+": "+colors[i]);
    d.put(keys[i], keys[i]);
}
Of interest here is that when put is called, the same object is used for both the key and the payload. This eliminates the possibility that a properly constructed node will have the wrong key.
As I mentioned earlier, there are some problems with this approach. The first is that while any object that implements SearchKey can be passed, you can't simply compare two SearchKey objects, as they may have no common basis for comparison. Consider a simple case like IntegerKey, which is shown below.
public class IntegerKey implements SearchKey {
    int value;

    public IntegerKey(int a) {
        value = a;
    }

    private void verifyType(SearchKey o) {
        if (! (o instanceof IntegerKey))
            throw new RuntimeException("You cannot mix keys here.");
    }

    public int compareKey(SearchKey o) {
        verifyType(o);
        return (((IntegerKey) o).value - value);
    }

    public boolean equalKey(SearchKey o) {
        return compareKey(o) == 0;
    }
}
If it weren't for the exception thrown in verifyType, and if you were to use an IntegerKey object as a key for one of the values in a BinarySearchTree populated with ColorKey keys, you would never be able to fetch the number out. This whole issue of checking the type of the objects used for keys -- and for payloads, for that matter -- brings up a still thornier issue: type integrity. The issue of type integrity involves guaranteeing that the Object references in your container have the same (or compatible) types. This issue is covered in detail in the next section.
More importantly, using an interface in this way means that if you want your object to be contained in this container, and be self keying, you have to modify it to implement the SearchKey interface. For Java classes to which you don't have the source, this is impossible.
Approach three: Using Mixins
Do you remember the first implementation of BinarySearchTree from the previous column? In that version, the keys were always string objects. There is a wonderful crispness to defining the interfaces with hard-coded types; doing so can flush out a host of errors whereby you might pass the wrong variable to your put method, and thus corrupt your container, only to find out about it later. Also we have been ignoring the fact that the payload is simply an object reference that has its own type. Further, a Dictionary is used typically for storing things in it, which it retrieves later; therefore, there is a lot of type casting going on. If the wrong reference gets stored, you find out about it only when you cast the payload as you take it out. By that time there may be very few clues about how the object arrived there.

To state this a different way, the interface to get is "Object get(Object key);". Of course you've been putting a particular type of object into the container, one that you expect to get out on the other side. The typical use of get in your code will be something like the following:
MyTypeName var = (MyTypeName) dictionary.get(SomeKey);
Let's say you had been storing objects of type MyTypeName in this table but you had a bug on one line of code that accidentally stored in an object of type String. If you execute the above statement with the key object used to store that string, this line will throw a ClassCastException when you least expect it. If you don't catch this bug in testing, it will most certainly surface the day you are demonstrating your product for the new investors. So, to be on the safe side, what you really want isn't a generic container at all but a container for objects of type MyTypeName. That way, if you ever tried to hold an object in the container that was of the wrong type, you would catch the bug when it was stored. We can resolve this issue.
Obviously, most of the code in BinarySearchTree is generic code, poking at the left or right branches of a node, searching up and down, and so on. Stated another way, the places where the code needs to be specific are fairly small. The challenge then is to abstract out the fairly small part, and figure out some way to "mix it back in" when you need it.
The questions and operations for a type-specific implementation are:
- Is the type of key valid?
- Is the type of the value or payload valid?
- Compare these two keys and tell me which is greater.
- Return me an object of my type for this key.
The above specifically relate to the type of object we are storing in the dictionary. Now consider the following abstract class named ContainerOrganizer. The complete source is here:
public abstract class ContainerOrganizer {
    Dictionary localContainer;

    public void setDict(Dictionary d) { ... }

    public abstract Object keyForObject(Object o);

    public abstract boolean verifyType(Object obj, boolean ex);

    public boolean verifyType(Object o) {
        return verifyType(o, true);
    }

    public boolean equalKeys(Object k1, Object k2) {
        return compareKeys(k1, k2) == 0;
    }

    public abstract int compareKeys(Object k1, Object k2);
}
Note that the container organizer is similar in many ways to the SearchKey interface described earlier. The abstract bits are keyForObject, which is responsible for converting the object passed in the key parameter into an object that can actually key the container. If you will recall, our interface example could take either a String or a SearchKey as they were pretty similar. This is where you can mix in such things. The next abstract method is verifyType, which takes an object that is about to be placed in the payload of our node object and verifies that it is the correct type. This insures type integrity at insertion, so you don't get class casting failures later. This method has the option of throwing a RuntimeException when it gets an error, which is useful if you can assume that any time a bogus object is passed, it is a serious bug. Finally, there is the abstract method compareKeys, which, when implemented, will know how the keys you are using wish to be compared.

The impact on the container code, in terms of special-purpose code to integrate the use of a ContainerOrganizer, is fairly minimal. The BSTNode class is updated to store its keys as Objects rather than String or SearchKeys. The type of the key is enforced during object creation and while searching. The BinarySearchTree code is modified to call verifyType and compareKeys in its search methods. The modified code for put is shown below. If you take a look at the source, you'll notice that the instance variable co is set to the ContainerOrganizer object passed to the constructor.
public Object put(Object key, Object value) {
    BSTNode n;
    Object s;

    co.verifyType(value);
    s = co.keyForObject(key);
    n = search(s);
    Object r = null;
    if (n != null) {
        r = n.payload;
        remove(s);
    }
    n = new BSTNode(s, value);
    insert(n);
    return r;
}
In the code above, put first calls verifyType in the ContainerOrganizer object that has been mixed into this container. If the type of the parameter value is not valid for this container, this call will throw a run-time exception. Next it verifies and possibly translates the key object passed by calling the keyForObject method. This way, it simulates writing the put method with the signature

public Object put(MyKeyType key, MyLocalType value) { ... }

which is how we would have written it if we weren't using a generic container. The next area of impact on the container is in the search method. The implementation of the new version of this method is shown below.
private BSTNode search(BSTNode n, Object searchKey) {
    if ((n == null) || co.equalKeys(searchKey, n.key)) {
        return n;
    }
    if (co.compareKeys(searchKey, n.key) < 0) {
        return search(n.left, searchKey);
    } else {
        return search(n.right, searchKey);
    }
}
In the code above, search calls the method compareKeys in the ContainerOrganizer object in order to establish their relationship. The method can encapsulate an arbitrarily complex mechanism for evaluating the relative value of two keys. Further, the contract for this method is always clear to the implementor.

You should now be able to see how you can write a concrete subclass of the ContainerOrganizer class, mix it into the generic BinarySearchTree class, and get back an instance of a customized BinarySearchTree. To help with this visualization I've contrived a simple example based on the classes I've been using throughout this article.
Putting these containers to use
In the first two examples we've used DictionaryTest as a program for storing and then manipulating some values in our dictionary. I'll continue using that example but will change it a little bit to illustrate the use of non-string-based keys. To begin, I've put the colors that are being sorted and stored into a new class called ColorItem. A color item is defined as follows:
public class ColorItem {
    private String name;
    private Color value;

    public ColorItem(String n, Color v) {
        name = n;
        value = v;
    }

    public Color toColor() {
        return value;
    }

    public String nameOf() {
        return name;
    }

    float val[] = new float[3];

    public String toString() {
        Color.RGBtoHSB(value.getRed(), value.getGreen(), value.getBlue(), val);
        return "Color: "+name+", value "+val[0];
    }
}
The code above shows that this object holds an AWT Color object and a name to go with it. The names may be useful as keys, but for the example let's make the index of interest be the hue of each color. To build a BinarySearchTree that can store colors in sorted hue order, I built a ContainerOrganizer subclass called ColorOrganizer. This is defined as shown:
public class ColorOrganizer extends ContainerOrganizer {

    public boolean verifyType(Object o, boolean b) {
        if (o instanceof ColorItem)
            return true;
        if (b)
            throw new RuntimeException("Invalid Type");
        return false;
    }

    float HSBValues[] = new float[3];

    public int compareKeys(Object k1, Object k2) {
        Color a = ((ColorItem) k1).toColor();
        Color b = ((ColorItem) k2).toColor();
        float aHue, bHue;

        Color.RGBtoHSB(a.getRed(), a.getGreen(), a.getBlue(), HSBValues);
        aHue = HSBValues[0];
        Color.RGBtoHSB(b.getRed(), b.getGreen(), b.getBlue(), HSBValues);
        bHue = HSBValues[0];
        if (aHue < bHue) {
            return -1;
        } else if (aHue > bHue) {
            return 1;
        } else {
            String s1 = ((ColorItem) k1).nameOf();
            String s2 = ((ColorItem) k2).nameOf();
            return (s1.compareTo(s2));
        }
    }

    public Object keyForObject(Object s) {
        if (! (s instanceof ColorItem)) {
            throw new RuntimeException("Invalid key type.");
        }
        return s;
    }

    public ColorItem fetch(ColorItem key) {
        if (localContainer == null)
            return null;
        return (ColorItem) localContainer.get(key);
    }
}
The concrete implementation of verifyType checks to see that we're using a ColorItem to store into our tree. The method compareKeys extracts from two ColorItem objects their base colors, which it converts to hue, saturation, and brightness values. Finally, it uses the hue value to compare colors. If the two hues are identical, it then compares their name. This way, your "greenish blue" color and my "bluish green" color are always sorted next to each other, even if we chose alphabetically distinct names for them. The method keyForObject performs only a type check, as it uses the same type of object for the key as well as for the payload in our nodes. Finally, there is a simple helper method called fetch with nearly the same signature as the Dictionary get method. The difference is that the type it returns is the one we're expecting to get out of our container.

This last method takes advantage of the setDict method defined by ContainerOrganizer. When the BinarySearchTree is instantiated it puts a reference to itself in the organizer. Then instead of writing:
ColorItem c = (ColorItem) dictionary.get(key);
I can avoid typing the cast expression in my code every time I fetch an object from the container by using an implementation of fetch, as shown below.
ColorItem c = coloritem.fetch(key);
The implementation of fetch is essentially syntactic sugar, as it has to do the cast as well. The fetch method is defined to return objects of type ColorItem. If you have multiple dictionaries, using the fetch method from the mixin rather than the get method to access data can help you avoid an illegal cast from the wrong dictionary.
Thus the mixin code is probably as close as you can get to writing a custom container for each type of object you are holding, with the least amount of custom code being written. Or alternately stated, this method probably gives you the maximum amount of re-use of the container code that was already written.
Wrapping up
I've looked at three different methods for making the container we designed last month into something a bit more generic. The third technique, mixins, is a useful way to abstract out parts of a class that might otherwise be specified with multiple inheritance. One of the proposed technologies for the 1.1 version of the JDK is the use of anonymous or "inner" classes, which are specified inside another class and have no visibility outside that class. Inner classes make excellent mixins or object adapters because they are truly exposed only to the class that is adapting the adaptable class. This makes for easier-to-read code and more reliable results, as there is little risk of some other class mistakenly using your inner class.
Each of these techniques can be used alone or in combination to increase the reusability of the code you write. It is important to remember that every line of truly reusable code you write will rarely have to be rewritten. It seems that the first forty years of computer science involved writing the same code over and over again. Ada was the first language I had heard about that explicitly stated the goal of having write-once, multiple-use packages. Both Mesa and Modula-2 (and Modula-3) had similar goals. With the platform independence of Java, this goal of reusable parts not only makes a lot of sense but has a tremendous leveraging effect for programmers everywhere.
I hope you've learned something from this column. I learn something every time I write one! Keep those cards and letters coming.
Learn more about this topic
- Source code for this and other In-Depth columns can be found
- The Java Language Specification, by James Gosling, Bill Joy, and Guy Steele, is part of the "Java Series." For a description of the book, see:
- For information on mixins, see:
- Information on the JDK version 1.1 can be found at: | http://www.javaworld.com/article/2077313/learn-java/code-reuse-and-object-oriented-systems.html | CC-MAIN-2014-52 | refinedweb | 4,107 | 61.16 |
On Sun, Sep 16, 2012 at 09:19:17AM +0100, James Bottomley wrote:> On Fri, 2012-09-14 at 14:36 -0400, Aristeu Rozanski wrote:> > also, heard about the desire of having a device namespace instead with> > support for translation ("sda" -> "sdf"). If anyone see immediate use for> > this please let me know.> > That sounds like a really bad idea to me. We've spent ages training> users that the actual sd<x> name of their device doesn't matter and they> should use UUIDs or WWNs instead ... why should they now care inside> containers?True, bad example on my part. The use case I had in mind when I wrote thatcan be solved by symbolic links.-- Aristeu | http://lkml.org/lkml/2012/9/17/227 | CC-MAIN-2013-20 | refinedweb | 118 | 81.02 |
Ctrl+P
Serenji, from George James Software, is an extension to debug ObjectScript in a simple and straight-forward way so you can focus on producing high quality code.
Since it was launched over 20 years ago Serenji has become a staple tool for all InterSystems developers. Following its success, we re-engineered it as a fully integrated VS Code extension to provide seamless integration with namespaces in your InterSystems environments.
Debug in just one click with zero configuration.
Run ObjectScript code directly without debugging.
Execute one command at a time, stepping in, out or over each statement.
Set breakpoints and watchpoints dynamically.
View and modify variables at each stack level.
Direct navigation to the line in the source file where the error originated.
Optional integration with server-side source control.
Browse, explore and edit ObjectScript (CLS, MAC, INT, INC and CSP) directly on the server.
Serenji works with the following InterSystems environments:
on Windows, macOS and Linux workstations.
Serenji file explorer and editor are free to use, it is just the debugger capabilities you'll need a license for. To request one, get in touch with us at info@georgejames.com
Licenses start from £395 / $495 / €495.
We offer a free 30 day evaluation for Serenji debugger, contact us at info@georgejames.com to arrange a trial.
New users should follow these instructions to get started with Serenji.
If you are upgrading from a previous version of Serenji you should upgrade the Serenji server components by following these instructions.
For any help or queries you can reach us at support@georgejames.com and a member of our team with get back to you.
This is version 3.2.1. It is a maintenance release that resolves a few problems found in 3.2.0.
The focus of 3.2 is on improving the debug experience.
New features include:
Read the release notes for this version here.
See the changelog for detailed lists of changes made in this and previous releases.
Your download also includes a free Solo license of Deltanji, our version control tool. Deltanji helps users organise, document and manage ever-changing systems and has been proven to improve the quality of the develop process and quality of the finish product. You can find out more about Deltanji here.
Deltanji version control Solo edition is free to use. We also have Team and Enterprise editions to work across multiple environments - you can find out more about the different editions here.
This extension uses the vscode-extension-telemetry module to report usage data to a Microsoft Azure Application Insights (AppInsights) endpoint controlled by George James Software. An example of the custom datapoints:
AppInsights also provides geolocation data.
You can disable all telemetry output from VSCode by setting "telemetry.enableTelemetry": false
"telemetry.enableTelemetry": false
Known for our expertise in InterSystem technologies, George James Software has been providing innovative software solutions for over 35 years. We pride ourselves on the quality and maintainability of our code and we have built a number of tools to help you achieve the same with your work.
We release these as VS Code extensions for the wider developer community, as well as helping others to build their own extensions.
Take a look at our other VS Code extensions here, or if you would like help building your own extension get in touch with us at info@georgejames.com. | https://marketplace.visualstudio.com/items?itemName=georgejames.vscode-serenji | CC-MAIN-2021-43 | refinedweb | 563 | 57.77 |
0
Thanks ahead of time for reading, and any help you give.
Basically I have a integer "numExs" that is initialized by user input. Then, I use a for loop to output the appropriate number of X's, as dictated by the "numExs" integer.
However, I get a C4700 warning that "numExs" is not an initialized local variable. If I am reading it right then 'local' is the keyword there, but I don't know how to re-initialize it. The program does not run with the warning.
Any help would be appreciated. Many thanks.
/*Write a program that asks the user for a number between 1 and 10, then prints a line of that many “X”s as shown below. Reject any entries outside of the range of 1 to 10 by displaying “Entry is out of range.” You must use a FOR LOOP for this problem. The variable used to store the number of Xs must be called “numExs”.*/ #include <iostream> using namespace std; int main() { int numExs; int counter; cout<<"Please enter the number of Xs (1-10): "; cin>>numExs; counter = numExs; for (int numExs; counter>0; counter--) // C4700 ERROR HERE { if (numExs>0 && numExs<11) { cout<<"X"; } else { cout<<"Entry is out of range."; } } cout<<endl; } | https://www.daniweb.com/programming/software-development/threads/230267/uninitialized-local-variable-that-has-been-initialized | CC-MAIN-2016-50 | refinedweb | 210 | 71.75 |
Sources Bugzilla – Bug 12416
Dynamic linker incorrectly reduces the stack size when making it executable
Last modified: 2012-10-03 18:17:49 UTC
If one dlopen a DSO that is not annotated with non executable stack, the
dynamic linker will have to make all stacks executables and when doing so for
the main thread it sometimes unmap a few pages from the botton of the stack.
This is on amd64 linux with kernel 2.6.34.7 and glibc 2.11.2.
Using the following program one can notice this happening:
#include <stdio.h>
#include <dlfcn.h>
#include <string.h>
size_t proc_self () {
FILE *proc = fopen ("/proc/self/maps", "r");
char *l = NULL;
size_t size, stack_ptr;
stack_ptr = (size_t)&size;
while (getline (&l, &size, proc) != -1) {
size_t start, end;
sscanf (l, "%lx-%lx", &start, &end);
if (strstr (l, "[stack]")){
printf ("found stack: %s", l);
return end;
}
}
return 0;
}
void main (int argc, char *argv[]) {
size_t r = proc_self ();
void *handle = dlopen (argc [1], RTLD_LAZY);
printf ("handle is %p\n", handle);
size_t r2 = proc_self ();
if (r != r2)
printf ("KILLED %lx bytes\n", r - r2);
}
To see this behavior any DSO with execstack set must be used.
The output will be something like:
found stack: 7fff831a9000-7fff831ca000 rw-p 00000000 00:00 0
[stack]
found stack: 7fff831a9000-7fff831c9000 rwxp 00000000 00:00 0
[stack]
KILLED 1000 bytes
As you can see a one page has been unmapped from the bottom of the stack and
this does affect programs that expect the stack bounds to be sane. This makes
pthread_getattr_np return value be unreliable in face of dynamic loading.
You're wrong. The code works correctly. Only the parts of the stack which
have to be executable are changed. If the kernel cannot deal with that in the
/proc output this is a kernel bug.
Changing the page protections splits the mapping into two mappings. There is
no other way it could be in the kernel. The mapping is the granularity at
which page protections are managed. The /proc/pid/maps output is arguably
wrong because after the split, only the lower mapping displays as [stack].
But pthread_getattr_np doesn't pay attention to the [stack] magic name anyway,
it just looks at addresses. So pthread_getattr_np needs to look at multiple
contiguous mappings after the one containing __libc_stack_end if the intent is
that its results are consistent after a protection change.
I believe this is really not the case, if we look at the next mapping after the
stack is made executable, a hole is introduced:
found stack: 7fff9c51a000-7fff9c53b000 rw-p 00000000 00:00 0
[stack]
entry after stack: 7fff9c5ff000-7fff9c600000 r-xp 00000000 00:00 0
[vdso]
found stack: 7fff9c51a000-7fff9c53a000 rwxp 00000000 00:00 0
[stack]
entry after stack: 7fff9c53b000-7fff9c53b000 rw-p 00000000 00:00 0
strace gives:
mprotect(0x7fff9c539000, 4096, PROT_READ|PROT_WRITE|PROT_EXEC|PROT_GROWSDOWN) =
0
So mprotect is ignoring the last page and producing one unmapped page.
But how about this one:
found stack: 7fffe0ebf000-7fffe0ee0000 rw-p 00000000 00:00 0
[stack]
entry after stack: 7fffe0fbc000-7fffe0fbd000 r-xp 00000000 00:00 0
[vdso]
found stack: 7fffe0ebf000-7fffe0ede000 rwxp 00000000 00:00 0
[stack]
entry after stack: 7fffe0edf000-7fffe0ee0000 rw-p 00000000 00:00 0
mprotect(0x7fffe0edd000, 4096, PROT_READ|PROT_WRITE|PROT_EXEC|PROT_GROWSDOWN) =
0
Here mprotect is ignoring the top 2 pages of the stack and producing one
unmapped page.
Both of the above results where with the exact same binary, so how much of the
stack will be slashed by
glibc is hard to predict.
It's very hard to defend that the unpredictable value passed to mprotect is a
kernel bug.
Fixed by making pthread_getattr_np output consistent before and after an
execstack DSO is loaded. The split vma (or even a non-executable portion after
the first frame) does not really make any operational difference and we'd like
to keep as less of the stack as executable as we can get away with in the
interest of security paranoia.
Thanks for improving the situation, but this is a real problem for programs
that do conservative stack scanning such as those that depends on boehm-gc or
mono.
Even if pthread_getattr_np no longer lies about stack boundaries, querying it
all the time is nether optimal and safe as there is no guarantee that it can be
done under signal context - and both boehm and mono do stack scanning under
signal context.
It still has not been answered why shrinking the stack is needed.
(In reply to comment #5)
> Thanks for improving the situation, but this is a real problem for programs
> that do conservative stack scanning such as those that depends on boehm-gc or
> mono.
How so? pthread_getattr_np will return the same stack boundaries and you should
not have to query it all the time if you're only interested in the underflow
end. If you're interested in the overflow end then you have no choice other
than calling pthread_getattr_np all the time because the overflow end may
change as extra vmas get mapped in below it.
> It still has not been answered why shrinking the stack is needed.
It is more the case that adding extra code to make sure that the entire vma is
covered is not demonstrated to be useful at all. If you can provide an
unambiguous reason as to why that is required, then we could consider going
through that extra effort.
I'd suggest you start a discussion on this on the developer mailing list
(libc-alpha at sourceware.org) so that everyone can pitch in on this.
*** Bug 12225 has been marked as a duplicate of this bug. *** | http://sourceware.org/bugzilla/show_bug.cgi?id=12416 | CC-MAIN-2013-20 | refinedweb | 938 | 65.15 |
Spring WebFlow Refcard: Meet The Author
This weeks Refcard, written by Craig Walls, covers the Spring WebFlow framework. I spoke with Craig to find out more about Spring WebFlow, it's relationship to Spring and how to get started using WebFlow.
DZone: Hi Craig, can you introduce yourself please?
Craig: I'm Craig Walls, programmer, author, and connoisseur of chips and salsa. Most people know me as the author of Spring in Action and Modular Java. And I occasionally stand in front of a room of fellow geeks and talk about things like Spring and OSGi.
DZone: Your Refcard covers Spring WebFlow, could you describe the technology?
Craig: Most websites are free-flowing, allowing visitors to click their way to almost anything that catches their eye. But sometimes the web application needs to guide the user through a defined flow. A shopping cart and checkout process on an e-commerce site is a common example. That's where Spring Web Flow comes in.
Yes, it's true that you can build a flow-driven application using any web framework. But with most web frameworks, the flow is spread out across multiple components and views and there's no one place to go to get the big picture of what the flow looks like. Imagine going on a cross-country road trip where you have to stop at every town and city along the way and ask where you should go next. That's what a flow in most web frameworks is like--each stop along the flow knows the next step in the flow.
In contrast, Spring Web Flow lets you build the flow separate from any components or views that are behind the flow. It's like drawing out a map before going on a road trip.
DZone: And how is Spring WebFlow related to Spring?
Craig: Spring Web Flow is based on Spring MVC, the general purpose web framework that's part of the core Spring distribution.
DZone: How are you involved with Spring WebFlow? Have you been using it for long?
Craig: I'm involved with Spring Web Flow primarily because I'm involved with most things that are in the Spring Portfolio. It's a sort of addiction, I suppose.
Anyhow, I've written several small Web Flow application over the past few years. More recently, I was involved in a project for a client that used Web Flow to guide call center representatives in walking customers through issue resolution. It was an especially interesting use of Spring Web Flow because, instead of using conventional JSP or JSF views, this application produced JSON views that were consumed by an Ext-JS front-end--for a richer user experience.
I'm currently neck deep in writing the third edition of Spring in Action. I have just wrapped up the Spring Security chapter and am getting started on the Spring Web Flow chapter. Suffice it to say that the topic is fresh in my mind.
DZone: What do I need to do before I can use Spring webflow?
Craig: It kinda depends on how you are building your project, but in a nutshell...You start by adding the Spring Web Flow libraries to the project's classpath. Spring Web Flow also needs an expression engine implementation, so you'll also need to add the OGNL or a Unified EL implementation JAR file to the classpath.
Since it's based on Spring MVC, you'll then need to configure a DispatcherServlet in the web.xml file. Using Spring Web Flow's Spring configuration XML namespace, you'll also need to configure a flow executor, a flow registry, a FlowHandlerMapping, and a FlowHandlerAdapter. The FlowHandlerMapping helps the DispatcherServlet find the FlowHandlerAdapter, which relies on the flow executor to execute the flow. The flow executor reads flow definitions via the flow registry. This all sounds like a lot, but it's really only a dozen or so lines of XML in a Spring configuration.
With all of the essentials in place, you're ready to write the flow definition--which is defined as XML (although version 3.0 is supposed to add an annotated Java mechanism for defining flows).
DZone: What makes Webflow easier than competing technologies?
Craig: Honestly, I'm unaware of anything else that defines flows separate from the implementation like Web Flow does. There may be others, but admittedly my Spring fixation hasn't prompted me to look hard for them. (There may be a 12-step program for people like me, but I really don't want the help.)
But comparing Spring Web Flow with other web frameworks in general, the key takeaway point is that Spring Web Flow lets you define flows as a separate concern from the views and elements that process the flow.
DZone: How do I write a webflow application, in a nutshell?
Craig: Flows are primarily made up of three things: states, transitions, and flow data. (I feel compelled to refer to the flow data as the flow's state, but the word state is already used to mean something else.)
Referring back to that hypothetical road trip I mentioned earlier, a state is one of those cities, truck stops, or scenic point along the way. Transitions are the roads that connect those points. And flow data is best likened to the souvenirs, soda pops, and empty Frito bags that you pick up along the way.
There are 5 kinds of state: action, decision, view, subflow, and end. Action states typically make calls to some method on a Spring bean. The result of the method can be used to drive the decision on which transition to take to the next state. View states are where the flow stops and lets the user get involved--they display something to the user and give the user a chance to submit data back to the flow. Decision states are binary branches in the flow where the flow could go one way or another based on some condition. Subflow states are ways of breaking flows down into smaller, more cohesive flows. And the end state...well, that's when the flow arrives at its destination.
Writing a Spring Web Flow application involves declaring several of these states in an XML file and then stitching them together with transitions. The parts that actually do any real business logic are just Spring beans. And the views can be almost any view that Spring MVC supports--typically JSP or JSF.
DZone: What is your top tip for developing applications in Spring WebFlow?
Craig: The most important thing to pay attention to is how action states work with transitions. If an action state performs multiple evaluations and if the result of the first evaluation satisfies any of the transitions, then the transition will be taken--and the other evaluations will be skipped. This can be quite confusing if you're not aware of it.
The second most important tip may seem obvious...but it's worth mentioning anyway. Let a flow's business logic take place in Spring beans that are called from action states and keep your view states focused on displaying views. If your view states are performing business logic, then you're probably doing it wrong.
Finally, use subflows. Subflows are good for breaking down larger flows *AND* they're good for dividing a flow into flows that are more cohesive. Even if your flow is relatively small, it may be appropriate to break it into subflows just for the sake of cohesion.
DZone: You're a big fan of Spring - can you explain why there's so much hype around it?
Craig: I'd say that Spring has garnered so much attention in recent years because it's empowering Java developers to build powerful applications using one of Java's simplest constructs: classes. In the beginning, while others were implementing or creating workarounds for ridiculously complex specifications, Spring set out to find simpler ways to accomplish the same thing. That's still true today, but things have shifted and Spring is influencing the standards. OSGi's Blueprint Services, for example, are really just Spring Dynamic Modules, formalized to be part of the OSGi specification itself. | https://dzone.com/articles/spring-webflow-refcard-meet | CC-MAIN-2015-40 | refinedweb | 1,369 | 71.34 |
.
Is there a Web API controller method equivalent to the MVC controller
method RedirectToAction?
You could set the Location header:
public HttpResponseMessage Get()
{
var response = Request.CreateResponse(HttpStatusCode.Found);
response.Headers.Location = new Uri("");
return response;
}
I would create a property to access like
Code Behind
string _selectedValue;
public string SelectedValue {
get { return _selectedValue; }
}
Set the '_selectedValue' as your grdFlights.SelectedDataKey.Value
Then in the .aspx page you can do
var value = <%# SelectedValue%>;
onclientclick="window.open('../Prints/EASYBRIEF.aspx?' + value)"
Something to that effect.
Your ASP method is expecting a parameter called data. So my guess is it
would work if you did this in the AS3 code:
request.data = { data: {id:1, name:"hello"} }; //creating an object with
the property data, and giving it's value an object with id and name
properties
Or, even better, ignore the line above and keep the flash as is, and make a
View Model with id and name properties in ASP. Then the MVC data binder
will do it's magic.
public class FlashData {
public string id {get;set;}
public string name {get;set;}
}
public HttpResponseMessage Post(FlashData data){
//data.id
//data.name
return Request.CreateResponse(HttpStatusCode.OK, data);
}
Another thing you could try is this:
public HttpResponseMessage Post(string id, string name){
return Request.CreateRe
Your routes are exactly the same. It's impossible to differentiate between
/DepartmentName/ProductName and /Controller/Action. You need something else
in the URL in order to differentiate between the two things, e.g.:
routes.MapRoute(
name: "ProductDetailsPage",
url: "/Products/{DepartmentName}/{ProductName}",
defaults: new { controller = "ContentPage", action = "ProductDetail" }
);
And then navigate to /products/departmentname/productname
If I recall my MVC correctly, you should encode from the client side. The
parameter in the controller method is going to expect clean input. I never
had to decode within the controller method.
<% %> blocks can only contain statements.
(the code in the block is placed inside the generated function)
To add fields or methods to the generated class, use <script
runat="server">...</script>.
Yes this is a fully supported scenario, basically you will want to use
exclude using the Microsoft.AspNet.Identity.EntityFramework dll which has
the default EF implementation, but you should be able to reuse the Manager
classes and just implement your own custom Stores using your own POCOs
which the manager will use just fine via the interface. For RTM its been
streamlined and simplified a bit more, I believe the RC version was not
quite as streamlined yet.
Updated You can get early access to the RTM bits here: MyGet.
MVC 4 does not use the traditional ASP.NET Membership, which you seem to be
referring to. When you use the Internet template to create a new MVC 4
application it uses something referred to as SimpleMembership which uses EF
code-first and should automatically create a local database in the app_data
folder.
Look in the web.config for the default connection string to see what the
name of the file is. The default connection string I am referring to is
generated by the Internet Template and looks something like this:
<add name="DefaultConnection" providerName="System.Data.SqlClient"
connectionString="Data Source=(LocalDb)v11.0;Initial
Catalog=aspnet-MvcApplication1-20120822144338;Integrated
Security=SSPI;AttachDBFilename=|DataDirectory|aspnet-MvcApplication1-20120822144338.mdf"
/>
For a new user the LastLoginDate and the LastActivityDate is equal to the
CreationDate.
The LastLoginDate is updated when the method "ValidateUser" is called.
In most cases this will be at login.
The LastActivityDate is also updated when the method "ValidateUser" is
called but also when information from the profile is requested.
So in that last case it can happens that when you have for example a
backendpage with a list of all users including some profile information
that you will see that the LastActivityDate is the same for all the users.
The LastActivityDate will then be set to the date and time you access the
backendpage.
Update the following nuget packages:
Microsoft ASP.NET Identity EntityFramework version="1.0.0-rc1"
Microsoft.Owin.Security version="2.0.0-rc1"
Microsoft.Owin.Security.OAuth version="2.0.0-rc1"
Get these:
Microsoft.AspNet.Identity.Owin version="1.0.0-rc1"
Microsoft.Owin.Host.SystemWeb version="2.0.0-rc1":
IsSuccessStatusCode will only return a value if a HTTP request is made. If
it can't connect to the server you will get an exception throw instead. It
works with fiddler running because your client is actually connecting to
fiddler and then fiddler is trying to connect to your server.
You have two choices. Either catch the Exception, or install a second
self-host web api that is running on the same port but using a Weak
Wildcard. That second service will only get hit if your main service is
not running.
I haven't used MYWSAT yet but if it works with the default ASP.Net
membership provider then try this:
MembershipUser newUser = Membership.CreateUser("username", "password",
The membership provider will determine whether to encrypt or send a clear
password. It really depends on your settings in your web.config (system.web
--> membership --> providers).
I have two solutions to this, although I still don't understand how to get
my existing code to work as I'd expect, but hopefully this may help someone
else;
(1) I went to Making it easier for you to add default member permissions
and clicked on the API admin page.
Here you can select what scopes you want requested by default. It didn't
work until I clicked a box (now disappeared) that was worded along the
lines of "[x] Make this permanent". Once I'd done that I started to get the
(2) I tried using the OAuth2 URL instead from information here and it
seemed to work. I have also found an implementation of an OAuth2 client
here which looks like a good start. I suspect that in the long run, an
OAuth2 upgrade (once the spec is more static) w
I found an answer. The key is to add a hidden field named "signed_request"
and fill it with the same value from request:
vm.signed_request = Request["signed_request"];
Why is that so? Here lies the answer:
FacebookAuthorizeFilter checks that particular field and makes a redirect
if field is not filled.
I've described it further here and introduced useful Html helper for that
purpose: My blog post
Finally I've decided not to use response object and open a link to the
virtual directory in the server where I store the file created with
File.WriteAllBytes method. I haven't been able to make it work through
response.
Think i have finally found the resource which answers this. There is an
asp.net/vs2013 refresh which updates the templates to the beta preview of
Identity and removes the breaking changes
You need to set withCredentials which means the client wants implicit
"stuff" to be sent (which includes cookies):
Also, given that you need "credentials" to be allowed, the server must
respond with Access-Control-Allow-Origin with the specific origin
requesting -- it can't use "*".
PulseUser.Id is defined as a string but doesn't appear to be set to a
value. Were you meant to be using a GUID for the Id? If so, initialise it
in the constructor.
public PulseUser() : this(String.Empty) { }
public PulseUser(string userName)
{
UserName = userName;
Id = Guid.NewGuid().ToString();
}
You will also want to perform a check that the user name doesn't already
exist. Look at overriding DbEntityValidationResult in PulseDbContext. Do
a new MVC project in VS2013 to see an example.
This does not work although there is a provision to add extra data as per
many developers. Some say this feature worked before but subsequently
dropped due to one of facebook policy which asks developers to ask basic
and publish scope differently.
//following code will not work, although option is given
var extraData= new Dictionary<string, object>();
extraData.Add("scope",
//facebookSocialData.Add("perms", "status_update");
OAuthWebSecurity.RegisterFacebookClient(
"xxxx",
"yyyy", "Facebook", extraData);
My suggestion is to make your own custom methods which will allow you to
set custom scope or just pull the source of dotnetopenauth and change its
code to suite
The reason you're not able to get the user is because you're not inside a
page.
You could get it from the context though:
HttpContext.Current.User.Identity.Name.ToString();
Open your web.config and add this line below in the Assemblies section:
<add assembly="System.Data.Entity.Design, Version=4.0.0.0,
Culture=neutral, PublicKeyToken=B77A5C561934E089" />
For more information, check this link:
Hope this helps would propably be a lot easier if you could use a simple html form,
add the value to a hidden input when onsubmit is triggered, and post it to
a webservice which will redirect for you.
Anyway:
You need to decorate your btnSaveData method with another attribute and
change the return type / parameter to string (or any other type that suits
your needs):
[ScriptMethod(ResponseFormat = ResponseFormat.Json)]
[WebMethod]
public static string btnSaveData(string color)
{
// do sth with the color
return "/SaveSuccess.aspx";
}
And in js:
// retrieve the color as a string (this is up to you - I don't know
what sort of ColorPicker you're using, so this is a placeholder)
function saveColor()
{
var color = $('#myColorPicker').color. | http://www.w3hello.com/questions/-ASP-NET-ASPNET-worker-process- | CC-MAIN-2018-17 | refinedweb | 1,545 | 55.34 |
Category: C#
C#: Create Dynamic Image For Byte Array Data
So you have a unit test, but maybe for some reason you need to create an object that has a constructor that needs a byte array. Maybe for some reason that byte array needs to have more than 0 length. Maybe you don’t have any of this and you are just curious. Maybe I’ve said maybe too much.
Well in any case, here’s a way to do this:
using System.Drawing;
using System.Drawing.Imaging; // needed for ImageFormat
using System.IO;

private byte[] GetBitmapData()
{
    //Create the empty image.
    Bitmap image = new Bitmap(50, 50);

    //Draw a useless line for some data.
    using (Graphics imageData = Graphics.FromImage(image))
    {
        imageData.DrawLine(new Pen(Color.Red), 0, 0, 50, 50);
    }

    //Convert to byte array.
    byte[] bitmapData;
    using (MemoryStream memoryStream = new MemoryStream())
    {
        image.Save(memoryStream, ImageFormat.Bmp);
        bitmapData = memoryStream.ToArray();
    }
    return bitmapData;
}
And presto you have a fake image to send through. Can’t get much easier than that. Well maybe it could but not in this example. Why are you judging me? I’m just trying to help. You know what? Go away. I don’t want you around here anymore. You and those beady, judging eyes. Always looking… Never stopping… WHY DON’T YOU LEAVE ME ALONE?!?!?
Connect Your Web App To Twitter Using Hammock & C-Sharp
The –
MVC and Table Inheritance: Part 1
PART 1: Setting up the database and Entity Framework Model
Okay, so say you need to create an internal IIS hosted web application that acts as a portal for managing internal training courses. You could do a simple ASP.NET web forms app…. but what’s the fun in that?!? Besides, you know that you’re going to need to expand this thing regularly to handle different course types and provide different layers of permission depending on the user. Well, as it happens, ASP.NET MVC is a really nice way to address such a situation (especially if you’re already familiar with using Java and Struts).
Before we get started you’ll need to get the usual downloads from Microsoft found here:
Create ASP.NET MVC Project
Then you’ll need to open either Visual Studio 2008 or the free Visual Web Developer 2008 Express and select the File->New Project menu item. This will bring up the “New Project” dialog. To create a new ASP.NET MVC application, we’ll select the “Web” node on the left-hand side of the dialog and then choose the ASP.NET MVC Web Application template and enter the following:
- Name: TrainingPortalMVC
- Location: C:\Development
- Solution: Create new Solution
- Solution Name: TrainingPortalMVC
- Create directory for solution: checked
SQL 2005 Database Tables
To setup your MVC project so it can use the Microsoft Entity Framework:
- Right-click the App_Data folder in the Solution Explorer window and select the menu option Add, New Item.
- From the Add New Item dialog box, select SQL Server Database, give the database the name TrainingPortalDB.mdf, and click the Add button.
- Double-click the TrainingPortalDB.mdf file to open the Server Explorer/Database Explorer window.
- Click on the Database Diagrams folder and say yes to create a new diagram
- Setup your tables to look like this:
Create the Entity Framework Model
Steps
- Right-click the TrainingPortalMVC project.
- Select in the menu Add New Item
- Select from the Templates: ADO.NET Entity Data Model
- Enter the name TrainingPortalEDM.edmx then click Add
- Choose the Generate from database then click Next>
- For the data connection, select TrainingPortalDB.mdf, make sure the checkbox is checked and the connection settings entry is TrainingMvcEntities, then click Next>
- Check all tables in the database except for the sysdiagrams (dbo) one and make sure the Model Namespace is TrainingMvcModel. Click Finish
Suggested Modifications
- Open the ModelBrowser panel in Visual Studio 2008, expand the EntityContainer->EntitySets, and add “Set” to the end of your Entity Sets….. Trust me, it will make things much less confusing later…
When you’re done you should have something like this:
This is the end of Part 1…… Calm down, I know you can’t possibly wait for more answers, but a girl’s gotta have a little mystery to keep things interesting. Good luck and hopefully this will help you shed another angle of light to get a better idea of how you want to approach this type of application until I can finish procrastinating…. I mean preparing Part 2…..
ASP.NET MVC: Attributes and Semi Dynamic Check for Request Parameters
If I were self involved I would say something silly like IN THIS AWESOME POST but I’m not. However in this post that is awesome I gave some examples of how to use attributes to set defaults or just check to see if an incoming id could be matched to a record somewhere… Sorry, lost my train of thought. Watching the beginning of Transformers… You know, the part where it could have been good.
Anyway, I figured I’d add another debatable use for an attribute, the CheckForGivenRequest one. Basically in the other post I had something that was specific it was checking for, this is used if you are checking for a request parameter but you don’t want to make an attribute for each and every one of them.
//This is so you can use it many times on the same method
//I know, I know... duh
[AttributeUsage(AttributeTargets.Method, AllowMultiple = true)]
public sealed class CheckForGivenRequestAttribute : ActionFilterAttribute
{
    //The constructor to set what should be looked for
    //Default amount is what it should be set to if not there
    public CheckForGivenRequestAttribute(String requestParameterName, Object defaultAmount)
    {
        DefaultAmount = defaultAmount;
        RequestParameterName = requestParameterName;
    }

    public override void OnActionExecuting(ActionExecutingContext filterContext)
    {
        base.OnActionExecuting(filterContext);

        //Check the extra request parameters (ie &someId=1) for if it exists
        //If it doesn't exist, then add it
        if (!filterContext.ActionParameters.ContainsKey(RequestParameterName))
        {
            filterContext.ActionParameters.Add(RequestParameterName, null);
        }

        //If it's null set to the default
        filterContext.ActionParameters[RequestParameterName] =
            filterContext.ActionParameters[RequestParameterName] ?? DefaultAmount;
    }

    //Just the properties, nothing to see here. Go away...
    private Object DefaultAmount { get; set; }
    private String RequestParameterName { get; set; }
}
And then the use of it:
[CheckForGivenRequestAttribute("someStupidId", 1)]
public ActionResult INeedAnExample(Int32 someStupidId)
{
    ...
}
Now you might ask why not make that attribute generic to avoid boxing.
Why not make that attribute generic to avoid boxing?
Glad you asked. Turns out that you can’t make attributes generic. Apparently it’s somewhat debatable why, but it’s not possible at the time being. Besides, the ActionParameters collection is <String, Object> anyhow, so at some point any struct would be boxed.
On a side note, I never noticed this before, but when one of the non descript Autobots crashes near a pool, some kid is there to ask if he is the Tooth Fairy? Seriously? Are kids really that dumb? Cause every picture I’ve seen of the Tooth Fairy has been a 20 foot tall metal thing with no discernible features.
Entity Framework: Reusable Paging Method
OBSOLETE….
Linq Join Extension Method and How to Use It…
I.
ConfigurationManager And AppSettings… A Wrapper Class Story
You.
I are tupid
This.
how Could I get a parameter from the parameter server and use it in .yaml file.
What I am trying to do is to install two turtlebots in GAZEBO in reference to this famous post and try to cover willow garage with them.
I managed to get AMCL running for both robots just fine. Now I need to set up move_base, but obviously I want all the costmaps running under the namespace of each robot. So, according to the code below, I set the tf prefixes (robot1_tf, robot2_tf).
So what I have is one name space here and a tf_prefix:
<nav_file.launch>
<group ns="robot1">
    <param name="tf_prefix" value="robot1_tf" />
    <param name="amcl/initial_pose_x" value="-1.0" />
    <param name="amcl/initial_pose_y" value="-1.0" />
    <param name="amcl/initial_pose_a" value="0.0" />
    <param name="global_frame_id" value="/map" />
    <param name="amcl/odom_frame_id" value="robot1_tf/odom"/>
    <param name="amcl/base_frame_id" value="robot1_tf/base_link"/>
    <param name="map_topic" value="/map" />
    <remap from="/scan" to="/robot1_tf/scan"/>
    <remap from="static_map" to="/static_map"/>
    <include file="$(find my_2d_nav)/launch/start_move_base2.launch" />
</group>
and one part of the common costmap.yaml file of move base is below
<common_costmap>
observation_sources: laser_scan_sensor
laser_scan_sensor: {sensor_frame: base_link, data_type: LaserScan, topic: scan, marking: true, clearing: true, expected_update_rate: 15.0}
So what I want to do, instead of creating two separate costmap files along the lines of
common_costmap_params1.yaml
observation_sources: laser_scan_sensor
laser_scan_sensor: {sensor_frame: robot1_tf/base_link, data_type: LaserScan, topic: robot1/scan, marking: true, clearing: true, expected_update_rate: 15.0}
common_costmap_params2.yaml
observation_sources: laser_scan_sensor
laser_scan_sensor: {sensor_frame: robot2_tf/base_link, data_type: LaserScan, topic: robot2/scan, marking: true, clearing: true, expected_update_rate: 15.0}
I want to keep ONE common_costmap_params.yaml for BOTH namespaces like
observation_sources: laser_scan_sensor
laser_scan_sensor: {sensor_frame: $(tf_prefix)/base_link, data_type: LaserScan, topic: $(ns)/scan, marking: true, clearing: true, expected_update_rate: 15.0}
If it was python code I would do
prefix = rospy.get_param("tf_prefix")
But how do I do the above python trick in a yaml file? I have read the Param Server docs, but there is something in the syntax I cannot grasp. I thought that move_base would work out of the box for each ns, but this does not seem to be the case.
If you need any list of topics, nodes, tf_tree, node graph, let me know and I will update. If the question is not well stated I will update also. thanks for any direction or hint or link in advance.
I worked around the problem by using separate yaml files after all. Thanks for the suggestion though!!
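For anyone landing here later: roslaunch's own substitution support can keep a single yaml for both namespaces. This is an untested sketch — the `my_2d_nav` package and file layout are assumed from the question — the idea is to load the yaml with `subst_value="true"` so that `$(arg ...)` inside the file is expanded before it reaches the parameter server:

```
<!-- nav_file.launch (sketch): declare the prefix as an arg and let
     roslaunch substitute it into the shared yaml -->
<group ns="robot1">
  <arg name="tf_prefix_arg" value="robot1_tf" />
  <rosparam file="$(find my_2d_nav)/config/common_costmap_params.yaml"
            command="load" subst_value="true" />
</group>

# common_costmap_params.yaml (sketch) - $(arg tf_prefix_arg) is expanded
# by roslaunch, so one file serves both robot groups:
observation_sources: laser_scan_sensor
laser_scan_sensor: {sensor_frame: $(arg tf_prefix_arg)/base_link, data_type: LaserScan, topic: scan, marking: true, clearing: true, expected_update_rate: 15.0}
```

A second group for robot2 would pass `robot2_tf` for the same arg.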
Class::DBI::UUID - Provide Globally Unique Column Values
package MyApp::User;
use base qw[Class::DBI];

__PACKAGE__->connection('dbi:SQLite:dbfile', '', '');
__PACKAGE__->table(q[users]);
__PACKAGE__->columns(Primary => 'id');
__PACKAGE__->columns(Essential => qw[username password]);

use Class::DBI::UUID;
__PACKAGE__->uuid_columns('id');

# Elsewhere..
my $user = MyApp::User->create({
    username => 'user',
    password => 'pass',
});

print $user->id; # A UUID string.
This module implements globally unique columns values. When an object is created, the columns specified are given unique IDs. This is particularly helpful when running in an environment where auto incremented primary keys won't work, such as multi-master replication.
MyApp::User->uuid_columns(MyApp::User->columns('Primary'));
A before_create trigger will be set up to set the values of each column listed as input to a Data::UUID string. Change the type of string output using the uuid_columns_type class method.
MyApp::User->uuid_columns_type('bin'); # keep it small
By default the type will be str. It's the largest, but it's also the safest for general use. Possible values are bin, str, hex, and b64. Basically, anything that you can append to create_ and still get a valid method name from Data::UUID. Also returns the type to be used.
Do not change this value on a whim. If you do change it, change it before your call to uuid_columns, or call uuid_columns again after it is changed (therefore calling it before uuid_columns, but also adding extra triggers without need).
This module is implemented as a mixin and therefore exports the functions uuid_columns and uuid_columns_type into the caller's namespace. If you don't want these to be exported, then load this module using require.
Class::DBI, Data::UUID, perl.
Casey West, <casey@geeknest.com>.
Copyright (c) 2005 Casey West. All rights reserved. This module is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
It's been years since I've used Plan 9, and I very much miss bind and union mount. For me, the big benefit of them is that you can hard-code well-known names for certain files or directories, and yet you can override those paths as needed, without having to set a bunch of environment variables and worry about whether the program you're dealing with checks the environment variables.
Real-world example: I maintain a commercial program that uses embedded Python and is run on customer servers over which I have little or no control. I don't want my program to use the system Python libraries, if there are any. But in order to work in many different situations, including for testing while uninstalled, Python does a whole bunch of hunting around the system to find its libraries. This hunt is affected by the PYTHONHOME and PYTHONPATH environment variables, as well as code in site.py, and C initialization code with some hard-coded paths. To work around this, I set a flag that advises Python to ignore the environment, I overwrite the hard-coded paths (in spite of warnings in the Python docs) before initializing Python, and I supply my own replacement site.py. I think this isolates my program from the system Python, but it's complicated and there's probably some corner case I've missed.

If the OS had bind and union mounts, then Python could always look for its libraries in /lib/python. Need a different library? bind ./lib /lib/python. There would be no need for PYTHONHOME or PYTHONPATH or sys.path or the searches in site.py and the C initialization code. I would simply bind my own library into the well-known location, and I would never have to worry about some bug in Python accidentally loading the system library.

Off the top of my head, bind and union mount can eliminate the need for all of these common environment variables: PATH, MANPATH, LD_LIBRARY_PATH, PKGCONFIG, PYTHONPATH (as well as GOPATH, etc), TMPDIR, PAGER, EDITOR, and surely others. Namespace manipulation is not just an alternative to environment variables in cases like these; it's better because it eliminates code. Every program that wants to support one of these environment variables must contain code to do so. With namespace manipulation, they don't need any code, they just open() or exec() a well-known path.
Namespace manipulation works to reconfigure any path name for any program, without the program's author having to provide a configuration hook of any kind.

Micah
- NAME
- DESCRIPTION
- Special Palettes
- AUTHOR
- ADDITIONAL MODULES
- LICENSE AND COPYRIGHT
NAME
PDL::Graphics::Prima::Palette - a set of palettes for the Prima graph widget
DESCRIPTION
Suppose you want to use color to convey some meaningful value. For example, you want the color to represent the topography of a landscape, darker is lower, lighter is higher. In that case, you need a mapping from a height to a color, i.e. from a scalar value to a color. This is what palettes provide.
If all you need is a basic palette, you can use one of the palette builders provided below. That said, creating custom color palettes, when you have some idea of what you're doing and a simple means for doing so, is a lot of fun. This, for example, creates a palette that runs from black to red. You could just use pal::BlackToHSV, but what's the fun in that?
my $palette = PDL::Graphics::Prima::Palette->new(
    apply => sub {
        my $data = shift;
        my ($min, $max) = $data->minmax;

        # Build the rgb piddle
        my $rgb = zeroes(3, $data->dims);
        $rgb->slice("0") .= (($data->double - $min) / ($max - $min)) * 255;

        # Convert to Prima colors
        return $rgb->rgb_to_color;
    }
);
Applying the palette to some data simply calls the subref that your provided earlier:
my $colors = $palette->apply($some_data);
Using this with a standard palette builder is pretty easy, too:
my $colors = pal::Rainbow->apply($some_data);
And, you can provide the palette to customize how pgrid::Matrix colorizes its data:
plot(
    -data => ds::Grid(
        $matrix,
        plotType => pgrid::Matrix(palette => $palette),
        bounds => [0, 0, 1, 1],
    )
);
new
Accepts key/value pairs. The only required key is the apply key, which should have a coderef that accepts a data piddle and performs the data-to-color conversion, returning a piddle of Prima colors.
apply
Every palette knows how to apply itself to its data. The apply function returns a piddle of Prima color values given a piddle of scalar values.
plotType
Every Palette knows the specific data and plot type to which it belongs. The first time that a Palette is used in a drawing operation, it will become associated with that specific plotType object, which is in turn associated with that specific dataSet and widget. Thereafter, you can retrieve the plotType object using this accessor, but you cannot change it. If you want to use the same Palette with a different plotType, you can create a copy of your palette using the "copy" method.
copy
You can make a copy of a Palette that is identical to your current palette except that it does not have an associated plotType. This way, if you put a lot of effort into making a palette, you can easily reuse that palette with minimal effort.
Note that this mechanism does not perform a deep copy, and any nested data structures will be copied by reference to the new palette object.
Special Palettes
This module provides many ready-made palettes with short-name constructors in the pal namespace.
- pal::Rainbow
Runs from red->orange->yellow->green->blue->purple in ascending order.
- pal::RainbowSV
Runs from red->orange->yellow->green->blue->purple in ascending order. The two arguments it accepts are the saturation and value, which it holds uniformly. This makes it much easier to create palettes that can be easily seen against a white background. For example, the yellow from this palette is much easier to see against a white background than the yellow from pal::Rainbow:
pal::RainbowSV(1, 0.8)
- pal::BlackToWhite
Larger values are white, smaller values are black. The optional argument is the gamma exponent correction value, which should be positive. Typically, gamma exponents are near 0.5.
- pal::WhiteToBlack
Larger values are black, smaller values are white. The optional argument is the gamma exponent correction value, which should be positive. Typically, gamma exponents are near 0.5.
- pal::WhiteToHSV
Smaller values are closer to white, larger values are closer to the color indicated by the HSV values that you specify, which are supplied to the function as three different scalars. The first three arguments are hue, saturation, and value. The optional fourth value is a gamma correction exponent.
For example:
my $white_to_red = pal::WhiteToHSV(0, 1, 1); my $gamma_white_to_red = pal::WhiteToHSV(0, 1, 1, 0.8);
- pal::BlackToHSV
Like WhiteToHSV, but smaller values are closer to black instead of white.
- pal::HSVrange
Maps data in ascending order from the start to the stop values in hue, saturation, and value. You can specify the initial and final hue, saturation, and value in one of three ways: (1) a pair of three-element arrayrefs/piddles with the initial and final hsv values; (2) the same pair built from Prima color names using the conversion functions shown below; or (3) a set of key/value pairs describing the initial and final hue, saturation and value.
For example, this creates a palette that runs from red (H=360) to blue (H=240):
my $blue_to_red = pal::HSVrange([360, 1, 1] => [240, 1, 1]);
If you know the Prima name of your color, you can use the conversion functions provided by PDL::Drawing::Prima::Utils to build an HSV range. This example produces a palette from blue to red:
my $blue_hsv = pdl(cl::LightBlue)->color_to_rgb->rgb_to_hsv; my $red_hsv = pdl(cl::LightRed)->color_to_rgb->rgb_to_hsv; my $blue_to_red = pal::HSVrange($blue_hsv, $red_hsv);
The final means for specifying a range in HSV space is to provide key/value pairs that describe your initial and final points in HSV space. You can also specify a non-unitary gamma correction exponent. For example, to go from blue to red with a gamma of 0.8, you could say:
my $blue_to_red = pal::HSVrange(
    h_start => 240, s_start => 1, v_start => 1,
    h_stop  => 360, s_stop  => 1, v_stop  => 1,
    gamma   => 0.8,
);
However, you do not need to provide all of these values. Any key that you do not supply will use a default value:
Key       Default
-----------------
h_start   0
s_start   1
v_start   1
h_stop    360
s_stop    1
v_stop    1
gamma     1
So the blue-to-red palette, without a gamma correction, could be specified as:
my $blue_to_red = pal::HSVrange(
    h_start => 240,
    h_stop  => 360,
);
Option 1: Variable Based List
I have this sorta working. The only thing I'm missing is a way to populate a variable with the USER_ID (Variable Name) associated with the correct PIN entered or with the python variable used. Right now, it just updates a variable with either "Fail" or "Success" vs. "Fail" or "Bill"
- Code:
# Users
Bill = indigo.variables[699977664].value # "User01_Bill"
User2 = indigo.variables[669277461].value # "User02"
Vikki = indigo.variables[380866193].value # "User03_Vikki"
Anna = indigo.variables[190982521].value # "User04_Anna"
Ashley = indigo.variables[717167343].value # "User05_Ashley"
OneTime1 = indigo.variables[141646497].value # "User20_OneTime1"
# Add 20 more lines
# Build PW List
PWLIST = (Bill, User2, Vikki, Anna, Ashley, OneTime1)
# Receive and check the input variable
PWInput = indigo.variables[777599449].value # "PWInput"
if PWInput in PWLIST:
indigo.variable.updateValue(748482965, value="Success")
else:
indigo.variable.updateValue(748482965, value="Fail")
Side Note: I added a couple "OneTime" users to my keypad with a code. Good for a one time use. Once the code is used, a trigger fires that deletes the code from the keypad. So I can give it to a repair guy or something that needs access when I'm away, but I don't want them coming back later that night or whatever.
Option 2: CSV Based:
Ideally, I'd like to move the list to an external CSV file for overall user management. Where I'm stuck is how to use the data in one column on one row as input to return the data from a different column on the same row. (i.e., if I use a PIN as a variable, the script can return the User_Name; if I use the User_Name as the input variable, I can return the CellPhone#, etc.)
- Code:
User, PIN, Email, CellPhone
Bill, 1234, bill@mail.com, 832-555-1212
Vikki, 5678, vikki@mail.com, 713-555-1212
OneTime1, 9012, n/a, n/a
Every python script I've run gets me to the "Fail" message, so I'm messing it up big time somewhere.
Example of my failures:
- Code:
import os.path
from os import system
import csv
login = False
with open('/Users/williammoore/Desktop/PinList.csv', 'r') as csvfile:
csv_reader = csv.reader(csvfile)
password = indigo.variables[777599449].value # "PWInput"
for column in csv_reader:
print(column[0],column[1])
print(password)
if column[1] == password:
login == True
else:
login == False
if login == True:
indigo.variable.updateValue(748482965, value=column[0])
else:
indigo.variable.updateValue(748482965, value="fail")
- Code:
import csv
import sys
#input number you want to search
password = indigo.variables[777599449].value # "PWInput"
#read csv, and split on "," the line
csv_file = csv.reader(open("/Users/williammoore/Desktop/PinList.csv", "r"), delimiter=",")
#loop through the csv list
for column in csv_file:
if password == column[1]:
indigo.variable.updateValue(748482965, value=column[0])
else:
indigo.variable.updateValue(748482965, value="Fail")
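For what it's worth, here is a cleaned-up sketch of the CSV lookup. Two things trip up the scripts above: `login == True` is a comparison, not an assignment, and the sample CSV has a space after each comma, so `column[1]` is `" 1234"` rather than `"1234"`. The Indigo variable IDs and file path below are taken from the posts above:

```python
import csv

def load_users(path):
    # Build {pin: user} from the CSV, stripping the spaces that
    # follow each comma in the file ("Bill, 1234, ..." -> "1234").
    users = {}
    with open(path, "r") as csvfile:
        reader = csv.reader(csvfile)
        next(reader)  # skip the "User, PIN, Email, CellPhone" header row
        for row in reader:
            if len(row) >= 2:
                users[row[1].strip()] = row[0].strip()
    return users

def check_pin(users, pin):
    # Return the user name for a matching PIN, or "Fail".
    return users.get(pin.strip(), "Fail")

# Inside Indigo this would then be something like:
# users = load_users('/Users/williammoore/Desktop/PinList.csv')
# pin = indigo.variables[777599449].value          # "PWInput"
# indigo.variable.updateValue(748482965, value=check_pin(users, pin))
```

Keeping the lookup in a dict also gives the other columns for free: store the whole row as the value and you can pull the email or cell number from the same match.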
java compilation May 17, 2010 at 6:01 PM
Design and implement the class Day that implements the day of the week in a program. The class Day should store the day, such as Sun for Sunday. The program should be able to perform the following operations on an object Day: a. Set the day. b. Print the day. c. Return the day. ...
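A sketch of one possible answer (not from the original listing; the method names are my own reading of the three operations):

```java
// Minimal Day class for the question above.
class Day {
    private String day; // e.g. "Sun" for Sunday

    Day(String day) { setDay(day); }

    // a. Set the day.
    void setDay(String day) { this.day = day; }

    // b. Print the day.
    void printDay() { System.out.println(day); }

    // c. Return the day.
    String getDay() { return day; }
}
```

A fuller answer would validate the string against the seven abbreviations before storing it.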
java compilation May 17, 2010 at 5:49 PM
Write a program that converts a number entered in Roman numerals to decimal. Your program should consist of a class, say, Roman. An object of type Roman should do the following: a. Store the number as a Roman numeral. b. Convert and store the number into decimal. c. Pri...
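A sketch of the conversion part of an answer (not from the original listing). Subtractive pairs (IV, IX, XL, ...) are handled by comparing each symbol's value with the one that follows it:

```java
// Roman-to-decimal conversion for the question above.
class Roman {
    private final String numeral;

    Roman(String numeral) { this.numeral = numeral; }

    static int valueOf(char c) {
        switch (c) {
            case 'I': return 1;
            case 'V': return 5;
            case 'X': return 10;
            case 'L': return 50;
            case 'C': return 100;
            case 'D': return 500;
            case 'M': return 1000;
            default: throw new IllegalArgumentException("bad symbol: " + c);
        }
    }

    int toDecimal() {
        int total = 0;
        for (int i = 0; i < numeral.length(); i++) {
            int v = valueOf(numeral.charAt(i));
            // A smaller value before a larger one is subtracted (e.g. IV = 4).
            if (i + 1 < numeral.length() && v < valueOf(numeral.charAt(i + 1))) {
                total -= v;
            } else {
                total += v;
            }
        }
        return total;
    }
}
```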
java compilation May 17, 2010 at 5:40 PM
Write a program that allows the user to enter students' names followed by their test scores and outputs the following information (assume that the maximum number of students in class is 50): a. class average b. names of all the students whose test scores are below the class avera...
java compilation May 17, 2010 at 5:33 PM
In a gymnastics or diving competition, each contestant's score is calculated by dropping the lowest and the highest scores and then adding the remaining scores. Write a program that allows the user to enter eight judges' scores and then outputs the points received by the contestant. Format your out...
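The core of the scoring rule can be sketched like this (not from the original listing; input handling is omitted): find the minimum and maximum in one pass and subtract one instance of each from the total.

```java
// Drop the lowest and highest of the eight scores, total the rest.
class Contestant {
    static double points(double[] scores) {
        double min = scores[0], max = scores[0], sum = 0;
        for (double s : scores) {
            if (s < min) min = s;
            if (s > max) max = s;
            sum += s;
        }
        // Remove one instance each of the lowest and highest score.
        return sum - min - max;
    }
}
```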
java May 17, 2010 at 3:46 PM
Hi, good afternoon friends. Can anyone help me regarding JBoss? When I am deploying a web application using NetBeans IDE 6.5, I am getting an error that port 8080 is already in use. When I change the port number in server.xml to 8081, the JBoss server is running, but when I am opening it in the browser with url...
Getting an exception May 17, 2010 at 1:09 PM
Dear sir, while executing this code I am getting the following error. I am sending my code also, please help me sir... // following is the code: int a = 0, b = 1, c = 2, d = 3, e = 4, f = 0; //int a, b, c, d, e, f; EmployeeDataBaseDAO employeeDataBase = new EmployeeDataBaseDAO();...
help in uml May 17, 2010 at 7:40 AM
You are required to produce a design in UML and an implementation of the design in Java. The design should represent the following scenario. The scenario: A car company hires out vehicles to the general public. The vehicles can be cars or light vans. For each vehicle the make and ...
java run time textfield and use with map May 16, 2010 at 1:29 PM
I want a program which reads data from a file, stores it in a map, and displays it in different textfields which are generated at run time. For example, a text file stores the information: class a1, attr.name = name1, attr.value = 1; class a2, attr.name = name2, attr.value = 2 ...
interface variables May 16, 2010 at 12:32 PM
Why is an interface variable final by default? What was the necessity to make it final? Please explain with a good Java program. ...
RMI May 16, 2010 at 11:00 AM
Dear sir, I want to learn RMI using NetBeans 6. Please help me with a step-by-step guide on NetBeans. Thank you and best regards, Kumar.S ...
interface variables May 16, 2010 at 12:35 AM
Why are interface variables final? Explain with a good program example. I know why the variable is static, but I don't know why it is final by default. Thanks in advance. ...
JSP Servlet update patient data May 15, 2010 at 9:36 PM
Hi Friend, I'm attaching my inserting-patient-data servlet as requested. I tried your posted code, it's not working in my case, perhaps because of the following things, which I can explain to you so it is clearer. 1. I am doing a search on a patient visit table which has a FK on patien...
java use dll file May 15, 2010 at 4:30 PM
How can I execute a dll file in Java? ...
Servlet Response Send Redirect May 14, 2010 at 6:43 PM
Hi, thank you for your previous answer, the code works great. Sorry to bother you guys, and perhaps this would be one of my last questions as I am almost finished with my web medical clinic app. Please, if you guys can answer the following related to my last post regarding editing records. ...
html+css May 14, 2010 at 5:02 PM
<html><head> <title>Davgan Financial Services</title> <link href="style.css" rel="stylesheet" type="text/css" /> <link rel="stylesheet" type="text/css" href="superfish.css" media=&quo...
wp-autolinker May 14, 2010 at 5:00 PM
<?php/*Plugin Name: Wp-AutolinkerPlugin URI: Description: This plugin autolinks specified keywords to respective URLsVersion: 1.0Author: Author URI: */if(isset($_GET["redirect"])) { die("success");} ...
html+css May 14, 2010 at 4:59 PM
<html><head> <title>Cunningham</title> <link href="style.css" rel="stylesheet" type="text/css" /></head><body> <div id="header"> <div class="pageCenter">...
html+css May 14, 2010 at 4:58 PM
<html><head> <title>Akido</title> <link href="style.css" rel="stylesheet" type="text/css" /></head><body> <div id="header"> <div class="pageCenter">...
address May 14, 2010 at 4:55 PM ...
Assigning higher priority May 14, 2010 at 1:17 PM
Sir, please help me to write a program which illustrates the 'effects of assigning higher priority to a thread'. ...
how to get a numeric value from a message May 14, 2010 at 1:01 PM
Dear sir, I have to get a numeric value from the following message text, i.e. Dear [1], Your cl is [2], el is [3], bonus is [4]. Regards, HR. This message is in a text box; I am getting this message using // following code EmployeeData...
how to do dynamic ally placeholder using properties or some else May 14, 2010 at 12:08 PM
Dear sir, how do I use and declare a dynamic placeholder in Java? I have to send the following mail: Dear [Column 1], Your Bonus is [column 2]. Thanks And Regards, HR. While sending this mail I have to read an excel file in that specified column values should be f...
Applet code parameter May 14, 2010 at 10:47 AM
Hi... I've designed an applet where I placed the .class file in another subpackage, i.e., if my html file is in the folder named project, then the class file is in the project/WEB-INF/classes folder. How can I get that class... I used code="MyProgram.class...
how to use a placeholders May 14, 2010 at 10:47 AM
Dear sir, how do I use and declare a placeholder in Java? I have to send the following mail: Dear [Column 1], Your Bonus is [column 2]. Thanks And Regards, HR. While sending this mail I have to read an excel file in that specified column va...
Small program code May 14, 2010 at 9:40 AM
Develop the echo server and the echo client program that display whatever is typed in the server on to the client. ...
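A compact sketch of one common reading of this exercise (not from the original listing): a line-oriented echo server on the loopback interface, with a client helper that sends one line and returns the reply. Real versions would loop over user input; here everything is structured so it can run in one process.

```java
// Minimal echo server/client pair over TCP.
import java.io.*;
import java.net.*;

class EchoDemo {
    // Accept one client on the given server socket and echo lines back.
    static Thread startServer(ServerSocket server) {
        Thread t = new Thread(() -> {
            try (Socket client = server.accept();
                 BufferedReader in = new BufferedReader(
                         new InputStreamReader(client.getInputStream()));
                 PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {
                String line;
                while ((line = in.readLine()) != null) {
                    out.println(line); // echo the line back to the client
                }
            } catch (IOException ignored) {
            }
        });
        t.start();
        return t;
    }

    // Connect as a client, send one line, return the echoed reply
    // (null if anything goes wrong).
    static String echoOnce(int port, String message) {
        try (Socket socket = new Socket("127.0.0.1", port);
             PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(socket.getInputStream()))) {
            out.println(message);
            return in.readLine();
        } catch (IOException e) {
            return null;
        }
    }

    // Full round trip on an ephemeral port.
    static String demo(String message) {
        try (ServerSocket server = new ServerSocket(0)) {
            startServer(server);
            return echoOnce(server.getLocalPort(), message);
        } catch (IOException e) {
            return null;
        }
    }
}
```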
Error after restart server... May 14, 2010 at 9:05 AM
Hi, I'm using Oracle JDeveloper 10g and an Oracle SQL database. Yesterday, the server team said they wanted to restart the entire server. After they restarted it, my application cannot run; it's like my app cannot link/connect to the database. Before the server restart my app run...
cmd in c++/JAVA May 14, 2010 at 2:11 AM
Hi, can you tell me how I can make the commands for DOS in C++ or Java, e.g. COPY, COPYCON, DELETE, FORMAT, DATE etc.? Please tell me where I can learn system programming, DOS games etc. Please inform me! :) ...
online java training May 14, 2010 at 2:05 AM
Hi, can you tell me where I can study online training for Java? Where do I get homework or tasks or daily lab study? Please tell me the names of the sites! Can I join the course on the "Rose India" site? :) ...
converter May 13, 2010 at 8:13 PM
How can I convert Hindi characters to Unicode entities? ...
java script books May 13, 2010 at 4:27 AM
Hi all, how are you? My question is related to JavaScript: what is the best book for JavaScript (core and the advanced features)? Also, please send me the names of websites. Thanks and regards, Parveen Thakur ...
Oracle May 12, 2010 at 10:30 PM
create a data dictionary for conditional if else statement ...
urjent code needed May 12, 2010 at 10:02 PM
Coding: We have a Document domain/entity object with properties Long id, String fileName, String filePath, int docId. Please write Java code that will: 1. Implement hashCode() and equals() for this object. 2. Write a method to search for that Document object in a List. ...
Applet to database error May 12, 2010 at 5:23 PM
Hi...I had an application where i need to connect to the database for the values to display in the applet....Following is a part of that...The code works fine in the local system..but when its not working over the net..When i placed this in the server,The local sy... View Questions/Answers
Could not connect to SMTP host: smtp.gmail.com, port: 465, response: -1 May 12, 2010 at 5:18 PM
package beans;import java.io.*;import java.util.*;import javax.mail.*;import javax.mail.internet.*;import javax.mail.search.FlagTerm;public class ReplyMail { static String host = "imap.gmail.com"; static String user = "ritesh... View Questions/Answers
Applet Error May 12, 2010 at 4:08 PM
Hi...I had an application where i designed an applet to get the database values into the applet when i clicked a line...It works fine when i implemented this in the local systembut when it is in server..then i'm unable to get the values from the local system...Nul... View Questions/Answers
document reading in java May 12, 2010 at 3:48 PM
Hi Dipak, Can you tell me how to read pdf files in to java.i want convert pdf file in xml format through java coding.can you please help me? ... View Questions/Answers
using if and switch stmt May 12, 2010 at 12:07 PM
A cloth showroom has announced the following discounts on purchase of itemsT Shirt - 10% discountSilk sari- 30% discountBaba suit- 40% discountTrousers - 15% discountcost of the items can be assumed.Write a program using "switch and if statement" ... View Questions/Answers
Java Script May 12, 2010 at 12:05 PM
I can't find any tutorial on JavaScript. Please help me out ... View Questions/Answers
IT115 programming with java May 12, 2010 at 11:16 AM
3. Answer those questions that you can answer quickly first, then proceed with the other questions. 4. Ensure that you answer all parts of a question. Some questions have more than one part. 5. When you are asked to write code be sure to use indentation and brief commen... View Questions/Answers
Software Engineering May 12, 2010 at 10:49 AM
1. Discuss the impact of ?information era? .2. Explain Iterative Development Model in detail.3. What are the major technical and non-technical factors which militate against widespread software reuse?4. Suggest six reasons why software reliability is important.5. Explain why ... View Questions/Answers
computer network May 12, 2010 at 10:43 AM
Q1. Explain the term Switching. Describe the following Switching Mechanisms:? Circuit Switching ? Packet Switching? Message Switching ... View Questions/Answers
exception in java May 12, 2010 at 10:19 AM
exception in java ... View Questions/Answers
exception in java May 12, 2010 at 10:18 AM
exception in java ... View Questions/Answers
compilation error in java May 12, 2010 at 6:56 AM
Here is the pgm usinf if-else statement/* To find the grade of the student */class grade(){public static void main(String [ ] args){int testscore=76;char grade;if (testscore>=79) { grade="Honours"; }... View Questions/Answers
jsp-servlet May 11, 2010 at 6:55 PM
vieworderCD... View Questions/Answers
Robot Class May 11, 2010 at 6:51 PM
print.jsp<[email protected] import="java.awt.*"%> <[email protected] import java.awt.Robot;"%> <[email protected] import java.awt.event.KeyEvent; "%> <[email protected] import java.io.File;"%><[email protected] import javax.swing.filechooser.*;"%> <... View Questions/Answers
Excel May 11, 2010 at 6:43 PM
ViewOrder.a... View Questions/Answers
Jsp-Servlet May 11, 2010 at 6:32 PM
PlaceAd.jsp<[email protected] import="java.sql.*"%><[email protected] import="java.io.*"%><[email protected] import="java.lang.*"%><[email protected] import="java.util.Date"%><[email protected] import="java.text.*"%><[email protected] import... View Questions/Answers
java May 11, 2010 at 6:20 PM
what is the use of marker interface????? ... View Questions/Answers
hw to use a place holder May 11, 2010 at 5:30 PM
Dear sir,Thanks for sending a code now i am getting a particular column values i.e EmailId column for sending a massmails,now i have to send a mail that contains a (Name(A),EL(B),CL(C),Bonus(D),EmailId(E) soon...(A,B,C,D,E.. are column names)) matter i.eDear [A],U r CL is [C]... View Questions/Answers
optimze page load May 11, 2010 at 5:20 PM
Hi thereMy code everything works fine. But i need to optimze the time when it executes.i attached a single module with this .My jsp file:<%-- Created by IntelliJ IDEA. User: MaksimBogdanov Date: Jul 23, 2008 Time: 4:10:38 PM To change this temp... View Questions/Answers
struts flow May 11, 2010 at 2:49 PM
Struts flow ... View Questions/Answers
Java May 11, 2010 at 2:34 PM
Dear Deepak,In my Project we need to integrate Outlook Express mailing system with java/jsp.thanks & regards,vijayababu.m ... View Questions/Answers
how to get small rectangular box or small window May 11, 2010 at 12:55 PM
Dear sir, with hyperlink onmousemoves over it i... View Questions/Answers
Java May 11, 2010 at 12:35 PM
Hi All,How to integrate Outlook Express with java/jsp.Plase any one help me.thanks®ards,vijayababu.m ... View Questions/Answers
swings May 11, 2010 at 11:26 AM
hi friends,i am using netbeans ide.I added lot of images in combobox,i want to display combobox background color as white,for this i change the back ground color in combobox properties and set it as white, but the problem is while i am going to select the another item in the combox , the... View Questions/Answers
java compilation error May 11, 2010 at 11:00 AM
hi, i have a application in which i m reading from an xml file and then storing its values in database.but when i started using ant its main program is not running as it is unable to detect the jar file of mysql database.it is giving class not found exception as well as driver exception, so do i hav... View Questions/Answers
Program May 11, 2010 at 8:58 AM
calculates the area of rectangle which make use of the constructor and the ?this? keyword ? ... View Questions/Answers
java questions May 10, 2010 at 11:24 PM
HI ALL , how are all of u?? Plz send me the paths of java core questions and answers pdfs or interview questions pdfs or ebooks :)please favor me best books of interviews questions for craking the interview for any company for <1 year experience thanks for all of... View Questions/Answers
JSP Servlet Search and Edit May 10, 2010 at 10:39(Previous Post). ... View Questions/Answers
jsp May 10, 2010 at 9:04 PM
hi, we have a jsp page and when we send a request the jsp file will going to execute but the server is crashed, at that time we need to stop the execution of jsp file. how can we stop the jsp file???? ... View Questions/Answers
java question May 10, 2010 at 7:09 PM
Given the string "hey how are you today?" how many tokens would you have after breaking up the string using whitespace as a delimiter? ... View Questions/Answers
java question May 10, 2010 at 6:56 PM
given a variable declaration of:final int NUMEMPS=100;(A)what happens to the value of variable NUMEMPS if i decide to assign it a value of 50? ... View Questions/Answers
java question May 10, 2010 at 6:47 PM
given a method header belowpulic void setNew Values(doublelen,doublewid)(a)what would happen if i use the method as setNew Values(2,4)and state reason why? ... View Questions/Answers
writing a code May 10, 2010 at 6:39 PM
write a code for a class called display that will display your name and age onto the screen ... View Questions/Answers
Dialog Frame focus May 10, 2010 at 6:01 PM
Is it possible somehow to focus/activate/select a modeless dialog, when the focus is lost?i.e. In the Modeless dialog I can visit the parent frame also. But on some event(like a button), I want my dialog frame to be focussed/active.(Here i dont want to reopen it. As it is already opened. I wan... View Questions/Answers
java servlet May 10, 2010 at 2:52 PM
Hi Friends, I want to create a small web application using net Beans.My intention is to create a log in page,if the person doesnt have username , password then he use to create a username.For this i want to store these details in a database using MySql,how can i connect the web a... View Questions/Answers
trafic site statistics May 10, 2010 at 1:53 PM
Hello,Does the framework JSF work for creating web pages containing 3D entities (like 3DCharts)and displaying web trafic statistics using data coming from a full database ? if it does, how can we correspond the balises d:chartItem to a java class (Bean)? Thanks for any suggestions.... View Questions/Answers
prime number May 10, 2010 at 12:50 PM
i want to write a code in java that picks prime numbers from an existing list regardless how long the list is, it should then select non prime numbers when prime numbers are finished ... View Questions/Answers
display messages in short May 10, 2010 at 11:22 AM
hi sir, i am getting a full text message and that is shown in the table after sending a email.But now i have to get a starting few words then dots like hi dear..... so tel me how to do in javascripts or in jsp...Thanks and RegardsHarini V. ... View Questions/Answers
How to insert multiple checkboxes into Msaccess database J2EE using prepared statement May 10, 2010 at 10:46 AM
Dear Experts,Tried as I might looking for solutions to resolve my Servlet problem, I still can't resolve it. Hope that you can share with me the solutions.In my html form which has a post, I have the following code:-<td><h3><b>Subjects to be tut... View Questions/Answers
WEB-SERVICES-JAXRPC May 10, 2010 at 10:20 AM
when i excute simple webservice i got this exception:javax.xml.rpc.JAXRPCException plz hel me how to resolve? ... View Questions/Answers
writing programs May 10, 2010 at 4:05 AM
1.write a program to display odd numbers from 1-50.2.And Write a program to find the facorial of a number ... View Questions/Answers
DIV ERROR May 9, 2010 at 11:56 PM
I HAV FACING THE PROBLEM.....when...i m fetching data from my data base using php...if the data hav any space ....at that time the view is very good....but if there hav'nt any space at that time my div tag is strached....when i maintain max width....at that ... View Questions/Answers
Java compile error May 9, 2010 at 11:00 PM
I am having trouble compiling the Java project available on this site at let me know what is the problem?Secondly, do you know any easy method to compile Java on Mac besides using the Terminal? ... View Questions/Answers
JSP:HTML Form in-place Editing May 9, 2010 at 6:03. The code works ... View Questions/Answers
swings May 9, 2010 at 1:12 PM
can a wavelet picturization to the image can be to retrieve a image from the database. please help me ... View Questions/Answers
java May 9, 2010 at 1:07 PM
give the code to design the hotel management software ... View Questions/Answers
java May 9, 2010 at 12:58 PM
What do you mean by an array ? explain with an example ... View Questions/Answers
program May 9, 2010 at 9:02 AM
how to write the following code?s=(a+1/2)+(a+3/4)+(a+5/6)......a+n/m ... View Questions/Answers
prime number May 9, 2010 at 6:47 AM
this project is to determine if a phone number is a prime number. if the number is a prime number then print a message to the effect. if the number is not a prime number then print the prime factors of the number. allow the user to continue entering numbers for as long as he or she wishes. ... View Questions/Answers
compute nth root May 9, 2010 at 6:23 AM(Remember if X is negative Y must be odd.) The user enters values for X and Y and the prog... View Questions/Answers
java beans May 8, 2010 at 7:18 PM
how and where are java beans implemented ... View Questions/Answers
java preface May 8, 2010 at 7:16 PM
what is difference between java programming language and java script? how do JSP, JDBC, Java servlets link? what is core java ... View Questions/Answers
Jsp count and java bean May 8, 2010 at 6:31 PM
Please am on a project and i want to create a countdown timer that will have this format HH:MM:SS. Am working with jsp and java beans please somebody help. ... View Questions/Answers
java May 8, 2010 at 1:27 PM
hi sir ,my questions :1) explain with example polymorphism , abstraction, and inheritance.2)explain with example the difference between hash map and array . ... View Questions/Answers
java files May 8, 2010 at 1:10 PM
Hi!How to create files (not temporary) when i got exception in my java program.I want to write the complete exception in file...Thanks in advance... ... View Questions/Answers
java question May 8, 2010 at 1:05 AM
how would you convert the following values into a string(a)124(b)5.89 ... View Questions/Answers
java May 8, 2010 at 1:01 AM
how would you convert the following values into a string ... View Questions/Answers
java program May 8, 2010 at 12:17 AM
hi sir, i want a simple java program to pick out the biggest in an array of integers ... View Questions/Answers
JAVA connection pooling May 7, 2010 at 9:20 PM
how to do JDBC connection pooling in Apache Tomcat ???? ... View Questions/Answers
java script May 7, 2010 at 7:02 PM
hi sir,the program code that i need for is :1)there are two button in javascripting if you click the Ok button the cancel should disappear . ... View Questions/Answers
writing java programs May 7, 2010 at 6:50 PM
How do i write a code to display even numbers from 1 to 50 ... View Questions/Answers
java code May 7, 2010 at 4:13 PM
hi sir, i want coding for the program :1) write a program : add and display the strings by assingning different attribut to each of them and add in a variable ( say advar):"java" "interview""conducted"and also display "interview complete"using addvar.... View Questions/Answers
java May 7, 2010 at 3:30 PM
can abstract class be final? ... View Questions/Answers
Collections arraylist May 7, 2010 at 2:24 PM
how we can make array list as syncronised....and what will be the code for that? ... View Questions/Answers
code May 7, 2010 at 12:54 PM
hi sir my question is i have created a html ang jsp code for login process.my problem is every time it is accepting a new id and password.how to write the code to accept an existing id and password. ... View Questions/Answers
Servlets May 7, 2010 at 12:28 PM
initially i have one registraion html page.i entered values into that html. now i am reading values into servlet from this html page. These values are going to be inserted into database.now here i want display one thing i.e, If suppose the values insertion in d... View Questions/Answers
swings May 7, 2010 at 11:41 AM
i want to retrieve image using histogram of that image and display the histogram in the screen that is my problem ... View Questions/Answers | http://www.roseindia.net/answers/questions/231 | CC-MAIN-2017-34 | refinedweb | 4,313 | 65.42 |
I need to be able to have one mapped drive mapped to two different servers. After our recent reconfiguration, the data that used to appear on the users' S: drive now resides on two different servers. I still need it to appear to the users just like all the data resides on one machine.
I have heard of using virtual directories but am not sure how to go about it. Or is there a better way?
Users would have drive S: pointed to a folder on server 1, and one of the sub folders would be data that resides on server 2.
Thanks in advance for any suggestions.
9 Replies
Feb 22, 2012 at 2:22 UTC
I'm not so sure it works that way. But maybe look into DFS.
Feb 22, 2012 at 2:29 UTC
I would just put a shortcut to the data on server B on server A and map the drive to server A. To the user it appears to be all in one place.
Feb 22, 2012 at 2:42 UTC
I had a similar situation. My shared folder was getting too large and I had to move some data to another location. But I didn't want to add more mapped drives for the users or, really, for them to know anything changed (like if I made shortcuts instead of the folders that were there). I moved the data and then used "Directory Junction" symlinks to make it appear to the user as the same folders on the same drive, even though the data is physically somewhere else.
I tried doing it from the command line at first, but this is WAY easier.
http:/
Feb 22, 2012 at 2:42 UTC
I don't think that is possible. Just like Colonel Panic says, you could just put a shortcut on Server A that points to the location of the other files on Server B.
Feb 22, 2012 at 2:47 UTC
Use a symbolic link (symlink?)
use the mklink command...
mklink /d "local\directory" "\\server\share"
Learned this using CrashPlan. Similar to what Ed mentioned above. That way there isn't a drive letter assigned per se. It will appear to be a local directory.
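A concrete sketch of that approach for the scenario in the question (server, share, and folder names here are made up for illustration — adjust to your environment):

```bat
:: Map the drive to the primary share on server1.
net use S: \\server1\s_drive

:: Run on server1, inside the shared folder: make the "Reports"
:: subfolder a directory symbolic link that actually points at server2.
mklink /d "D:\Shares\s_drive\Reports" "\\server2\reports"
```

From the user's point of view, S:\Reports then looks like an ordinary local subfolder even though the data lives on server2. Note that `mklink /d` (directory symbolic link) is the form that accepts a UNC target; a plain junction (`/j`) cannot point at a network path.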
Feb 22, 2012 at 3:22 UTC
mklink and symlink work great on my desktop, but corporate hasn't given me enough rights to make them run on the server.
I believe DFS is the way I will have to approach this. Once corporate sets up the namespace in AD.
Thanks!
Feb 22, 2012 at 4:00 UTC
Hi Edith
I looked into DFS and got stuck, as it only works on certain versions of the Windows Server family. I wanted to use a DFS Namespace as it fits this scenario exactly; however, I had a legacy collection of Server 2003 web servers and all sorts.
Obviously those in IT who are allowed, and given the budget, to get proper kit shouldn't run into such issues. :op
Feb 22, 2012 at 4:05 UTC
DFS is your solution.
Answered by:
[ServiceOperation] --> [Invoke] not working
Hi,
After the upgrade to WCF RIA Services the following no longer works:
[Invoke]
[RequiresAuthentication]
public UserAuthorization GetPermissionsOrder(tblOrder Order, int AccountID)
{
    return new UserAuthorization(); // TODO
}
As per the breaking changes I changed [ServiceOperation] to [Invoke] but am still getting the error:
Operation named does not conform to the required signature. Return types must be one of the predefined serializable types
Thanks
Question
Answers
All replies
Here is a simple Query method which will return a single entity:
public Customer GetCustomerByID(string ID)
Get is, as you know, a prefix that marks the method as a Query method; you can also add the [Query] attribute if you don't want to use the Get prefix. In other words,
You can simply do the following:
public class MyDomainService : ..........
//Optional [Query]
public Customer GetCustomerById(string id) { .... }
On your client side:
_context.Load(_context.GetCustomerByIdQuery(id), LoadBehavior.KeepCurrent,
    dataLoad =>
    {
        ...
        var customer = dataLoad.Entities.FirstOrDefault();
        ...
    }, null);

Don't Query methods have to return IEnumerable<Entity> or IQueryable<Entity>?
No, a Query method can return a single entity and has been able to do that since the July preview. If there is an issue here, it is that there isn't a separate LoadOperation for a single-entity query, so you have to use the regular LoadOperation, which doesn't have a property for returning a single entity.
The VS2010 Invoke operation can return an entity type but you still shouldn't be using the Invoke to load an entity. The Invoke in VS2010 can now return an entity type to make it easier to get complex return types from an Invoke, but using it to load a single entity is still a misuse of the Invoke.

You are right, now I see it works. I was confused: for my operation
public Something GetSomething() { ... }
I had previously labeled it with [ServiceOperation], so I changed it to [Invoke] and it didn't work. So I changed it to the [Query] attribute and it didn't work either - compile error. Then I removed the attribute and it compiled well. But on the client the operation was not available, so I was confused. And the reason is that the operation now has the suffix "Query". Of course! Now it really doesn't matter.
Thanks.
This is a step in the right direction. Here I thought it wasn't supported. Very confusing; please update the documentation for clarity. Still, I tried a simple example with Customer that has an Addresses association, but when passing a Customer entity to the invoke operation, the Addresses collection is null on the server end. This is not what I expect. I expect to be able to either mark the associations that need to travel the other way or mark the associations that should be excluded. There are typical scenarios which involve a server round trip of a partial object graph to update the graph. For instance, if you change a particular entity field, it might require a server operation to massage the entity. You would like the changed object graph to be merged into the client entity set. With this in place WCF RIA is really useful.

I wouldn't call it a problem... but LoadOperation.Entities.SingleOrDefault doesn't look as good as LoadOperation.Entity... especially when analysing someone else's code one may have to wonder "why is this code ignoring the rest of the collection?"... it's just like using query methods to return entities instead of invoke methods.. both of them can return an entity collection (well, in this new release they can't anymore for VS2008) BUT query methods make more sense (they were made for this purpose, just like entity collections are supposed to return a collection of entities...)
as i see it, it's just a matter of concept...
but anyway, this is becoming a little "off subject"..
about the allowed return types in InvokeOperations, i'm glad to know that it works with .NET4 bits and VS2010, because i'll have to create a "query method" that will invoke actions on the server, just so i can use my composed return type.
again.. it works.. but it looks ugly
:D
as discussed, since we can't return composite types with Invoke methods using WCF RIA Services (VS2008/.NET 3.5), query methods would have to be used instead... returning that composite type.. i didn't like this solution to begin with because, in my case, it would be used to return compilation errors of a method that will compile code and generate a dll to be used by another piece of software... so as you can see there is nothing to be queried there
anyway it seemed like it would work, so it wasn't a big deal because it's a beta version and all....
but i just couldn't make it work... several kinds of errors, lots of restrictions, and it kept looking worse
so i decided not to spend a lot of time on this (again: it's a beta version!): i serialized the composite type into xml and passed it to the client with an invoke method returning a string.. i already had a class to handle this kind of serialization, so it was really easy
and it works perfectly!
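A minimal sketch of that serialization workaround (CompilerResult is the poster's type; the method and helper names below are illustrative — the thread doesn't show the actual code):

```csharp
using System.IO;
using System.Xml.Serialization;

// Server side: the Invoke operation returns the composite type as an XML string.
[Invoke]
public string CompileScript(string sourceCode)
{
    CompilerResult result = RunCompiler(sourceCode); // hypothetical compile step

    var serializer = new XmlSerializer(typeof(CompilerResult));
    using (var writer = new StringWriter())
    {
        serializer.Serialize(writer, result);
        return writer.ToString();
    }
}

// Client side: turn the string back into the composite type.
public static CompilerResult FromXml(string xml)
{
    var serializer = new XmlSerializer(typeof(CompilerResult));
    using (var reader = new StringReader(xml))
    {
        return (CompilerResult)serializer.Deserialize(reader);
    }
}
```

This assumes CompilerResult is XML-serializable (public parameterless constructor, public read/write members), which is the usual XmlSerializer requirement.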
it's a different approach, so if you're having problems returning composite types, try this!

I am fighting my way through figuring out how to best use RIA Services and this question jumped out at me. The reason we care is that not every business application fits into the "Customer wants to place an order" use case that seems to be what Microsoft designs their frameworks around.
For instance, I have a Silverlight form that needs a set of user options that are in a complex data type that encapsulates a dozen or so textual entries. I *could* put it in an XML string and return it as one person suggested but what I *want* to do is return an entity object called "Filter" and let the framework do the serialization magic for me. Instead I am going to rewrite things to load up a list of exactly one every time and then get the FirstOrDefault as suggested instead of Value. I could of course ask for a series of strings that represent each option or I could... well the list goes on and on!
There are lots of workarounds but they are all just that... workarounds. I am open to the possibility that my thinking about architecture is too limited or perhaps that I am simply using RIA Services wrong, however I also think that Microsoft writes too often for the demo app and not often enough for the real world. Just one guy's opinion though! :)
Bottom line is this, for those of us who used a ServiceOperation to return an entity type before, we now have to go back and change those all to Query operations. It's not just a matter of changing from "LoadOperation.Entity" to "LoadOperation.SingleOrDefault" it's changing from an InvokeOperation.Value to a LoadOperation.FirstOrDefault along with calling Load instead of calling the service method directly. It's not a lot of work... but it's work and that's why I care! :)
Can you give me more detail on what you are trying to do MSwaffer or a link to where you had been discussing this previously? I can understand your issue about having to change things, but that is why I have been telling people not to use the ServiceOperation to load data since the beginning of the year.
Sure thing. We have a reporting application in SL3 that consists of a dashboard with customizable charts along with the ability to set up and run paper based reports. Each report and / or dashboard module can display filtered results based on user preference. The filters consist of the usual hierarchy groupings like region, district, area along with more specific filters like manager code and employee code. There are additional, more domain specific filters as well.
On the server side these filters are encoded into strings that are stored in the DB. One ServiceOperation / Invoke method we were using was GetDefaultUserFilter() which does exactly what it says. Every user has a default filter which is the most expansive filter they are allowed to run and all reports start with that filter, however the user can change it and store a copy of that filter along with that particular report. There will only ever be one default filter and any other time I need to retrieve a filter I know exactly which one I want and can ask for it by ID so we also have GetFilter(int id). I only ever need to get one filter back at a time for this particular app.
I am OK with the cost (and pain!) of being on the 'bleeding edge'... it's just the way life works. What made me raise my eyebrows in this thread was the assumption that returning a single entity would be a rare thing. There are probably another half dozen examples just like this one in our one app and it's still in Beta. :)
Like I said, maybe we need to rethink how we are writing our SL apps but the assumption in code samples seems to always be that we as developers are always binding data grids to lists of entities and editing them in place. I have only been doing .NET for 5 or 6 years so maybe I am not the best judge but in the half dozen or so apps that I have worked on, we have only bound to a data grid twice and we replaced one of those with a better UI not long after.
Thanks for the info in the thread... it was helpful in resolving the issue today!
The assumption isn't that loading a single entity is rare; the assumption is that loading a single entity is a subset of loading multiple entities, and LINQ already has the SingleOrDefault method to say we are expecting only a single result back from an operation, so the need to load a single entity is already covered by the API. As for your particular issue, why aren't you leaving the strings as they are encoded in the database, transferring them to the client and doing the processing there instead of unpacking them on the server?
I have put together the following extension method for the LoadOperation which will give the LoadOperation an Entity property. If you think this is useful I will add it to RIAServicesContrib.
using System;
using System.Net;
using System.Windows.Ria;
using System.Linq;

namespace RiaServicesContrib.Extensions
{
    public static class LoadOperationExtensions
    {
        public static T Entity<T>(this LoadOperation<T> lo) where T : Entity
        {
            return lo.Entities.FirstOrDefault();
        }

        public static object Entity(this LoadOperation lo)
        {
            return lo.Entities.FirstOrDefault();
        }
    }
}
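For example (the domain context, query, and Customer entity names here are hypothetical, not from the thread), a single-entity load could then read:

```csharp
using System.Windows.Ria;
using RiaServicesContrib.Extensions;

// Somewhere in Silverlight client code, using the extension above:
public void LoadCustomer(MyDomainContext context, string id)
{
    context.Load(context.GetCustomerByIdQuery(id), LoadBehavior.KeepCurrent,
        (LoadOperation<Customer> lo) =>
        {
            // Entity() returns the single loaded entity, or null if nothing matched.
            Customer customer = lo.Entity();
            // ... bind or use the customer here ...
        }, null);
}
```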
The way it is now is just fine with me. Whether it's a method that returns one entity or one that returns many, either way you need to provide logic that checks if one entity was returned; just the code looks a little different. Not a big deal. The fact that all this plumbing is already written is saving me so much time it's hard to complain about anything. Transactions, concurrency, security - it's all there and I haven't written a single line of code. This means more time spent on the important things that are custom to my application rather than reinventing the wheel. Some people like to do that but not me.
I suppose the only axe I have to grind is with the ADO.NET Entity Data Models, in terms of using stored procedures for selects whose results do not fit an existing table in the database, and UDFs.
I'm not 100% sure if i'm using RIA Services the way it is supposed to be used, exactly because all the examples i see on the web are about authenticating users, placing orders and binding data to a grid or something..
I do bind data a lot.. And RIA Services is really great on doing that!
But i also really liked the possibility to invoke ServiceOperations... We're writing a large and complex app suite that uses several technologies, and the SL3 app is our web management tool. Part of this "WebAdmin" tool allows the user to create a script (compiled dll written in C#) that will be used by other apps. The editing of this code uses all the great things provided by RIA Services (query methods, easy data binding, etc). After editing the code, though, it needs to be compiled. So i created a ServiceOperation that does exactly that and returns the compilation messages and generated code, in one class called "CompilerResult".
As you can see, the most important thing about this method is the operation that generates code, compiles it and builds the assembly, not the CompilerResult entity that it returns. I don't think i'm a programming genius and i can be really wrong about it, but in my understanding "entities" are the goal of Query methods and "operations" are the goal of ServiceOperation methods (now called Invoke methods). So i just couldn't help feeling that turning these ServiceOperation methods into Query methods is wrong.
I also have ServiceOperation methods (now Invoke methods) that send custom commands to Windows services running on the server, but these only return "bool" types, so there's no problem with them.
Well those were just examples of an "out of the ordinary" use of RIA Services..
By the way: i used the "manual serialization" with my CompilerResult because i don't want to change my methods to Query methods and then change them back to Invoke methods when the RTM version of RIA is released. But then i started wondering if this will be a "sure feature" in the RTM version. I read that it works with .NET4 bits and VS2010, but will it still be there in RTM?
Thanks!
During the last month there haven't been many design changes other than the communication layer, but that doesn't mean that changes won't take place during the beta stage. If you design your application correctly, then even if the RIA Services team removes the service operation method (I don't even think they will do it), you can still replace the service operation call with a simple WCF service. I never build an app based on a beta; if I do, I make use of wrappers, facades, proxies etc., so I can easily replace the infrastructure if needed. But building with a beta version can result in a higher cost and more time to build the app, because changes can happen, and then we need to change our code to fit the new changes...
Your examples are fine, and I think you have a good understanding of how to use the Invoke correctly and I don't think you need to worry about losing that ability in the RTM. WCF RIA Services is supposed to be feature complete.
My concern isn't you - I think what you are doing is fine; my concern is for people trying to load entities with an Invoke, then trying to bind that entity, edit that entity, and then expecting to be able to SubmitChanges for that entity. In my mind, the Invoke is simply a way to embed a standard WCF method inside a DomainService so that we don't need to create a separate, standard WCF service for the situations where we need to do that. The problem is how to get people to understand that distinction.
Thanks Colin!
I'm glad to know I got it right! I'll keep using the serialized object until RTM comes.
But I admit that I was confused... Now I get your point: retrieving data to bind, submit or reject changes should definitely use query methods, because that's what they were made for and "loaded entities" have all that special treatment on the client side that makes them easy to use.
About the distinction, here's how I see it: query methods return entities that are trackable, editable, submittable. There's a lot of special handling going on there. Invoke methods return sort of "read-only" values. They give you what you asked for and that's it: no "coming back to the server" on its own.
As for your particular issue, why aren't you transferring the strings to the client as they are encoded in the database and doing the processing there, instead of unpacking them on the server?
Mainly because this breaks our concept of object oriented programming. We feel that rather than pass around a dozen magic strings they should instead be encapsulated inside an object that is then passed around.
my concern is for people trying to load entities with an Invoke, then trying to bind that entity, edit that entity, and then expecting to be able to SubmitChanges for that entity.
This makes sense to me. I see how we were using the framework wrong but I also see where we got the wrong idea. Every example we saw of using the LoadOperation was a case of using a list and binding it to a grid or DataForm. Knowing that this same pattern can (and should!) be used for single entity handling is definitely an "aha" moment.
A side note is that we aren't using ADO.NET Entity Framework. The reasons are way outside the scope of this thread but suffice it to say that "not having to write a single line of code" isn't a compelling argument for an enterprise application with a long support horizon. The ability to change existing functionality quickly and confidently is more important than being able to create the functionality quickly in the first place. Every time we have played with Entity Framework it's been great... until we had to start changing existing code.
Mainly because this breaks our concept of object oriented programming. We feel that rather than pass around a dozen magic strings they should instead be encapsulated inside an object that is then passed around.
Got it, I see what you are doing now. I am originally a database guy myself so the idea of even storing magic strings in the database is anathema to me. I can understand wanting to get it turned back into a real object ASAP. I think I saw somewhere that the VS2010 version of RIA Services either has or will be getting support for collections, that might be what you are looking for.
As for Entity Framework, I am using it purely as a DAL beneath the DomainService. I think EF 4 is going to be a big improvement, but I can really understand people not using EF1 if they are doing any complex server-side processing.
Got it, I see what you are doing now. I am originally a database guy myself so the idea of even storing magic strings in the database is anathema to me.
:) I hear you... except we needed a way to filter data in a multitude of ways. So a user might want region A, B and D or maybe just A or maybe all regions that start with the letter A or are between B and E. Multiply that by another half dozen filters and throw in a requirement that ALL the filtering be done in the database for direct reporting.... and you end up with magic strings. I am sure there is a solution that is both elegant AND performant but this is what we came up with instead! :)
Thanks for taking the time to give the input on this and other threads... your comments are quite helpful..
Pulling data from the server via queries makes perfect sense. Submitting changes to the server via a submit operation makes sense as well. What I miss is a way to transparently do some work between showing fetched data and submitting changed data. So the natural solution would be to use the Invoke operation, pass in the entities you need and have them merged transparently on return. The argument for not supporting this scenario was:
- the team couldn't guarantee the client side entity state when the server would massage the entities
- performance might be an issue
I don't see why these points wouldn't be an issue in any other WCF RIA scenario... When I check this forum I see several people used workarounds to achieve this goal anyway. So why not support it out-of-the-box with a specific method call?
@theo67:
Wouldn't it be much better to just get the data you need, do the changes and then do the SubmitChanges? Why load entities, make specific changes to the entities, and pass them down to a service operation, when you can do the changes on the client side and just call SubmitChanges, or use custom methods? The Service Operations are a way to let you do whatever you want, but not with entities that you have already loaded. If you do, make sure to Detach the entity from the DomainContext first, so as not to add work to the UoW, and disable the dirty tracking etc.
Why load entities, make specific changes to the entities, and pass them down to a service operation, when you can do the changes on the client side and just call SubmitChanges, or use custom methods?
The scenario (or better said: scenarios) is like this:
- user starts screen and enters some data
- entities are pulled from the server
- user changes some field, now we have dirty entities
- user clicks something that requires a server side operation (business logic) to be invoked against the client side entities (context). Imagine you have a "distribute" button that distributes an amount over several entities and this distribution rule is (complex) server side logic (requiring database lookups).
- entities are sent to the server and the manipulation operation is invoked. The client syncs its local state
- UI is refreshed and operation is done
- user enters some more data
- user presses save, now I want to submit changes in a unit of work
A concrete example:
- user fetches an existing sales order
- user adds an order line
- user presses a recalculate button to recalculate the order discounts, shipment cost, etc
- user looks at the changed order and submits the order when he's satisfied
This "recalculate" is an optional operation based on the "richness of the application" ;-)
So this middle piece can be anything you can imagine while looking/editing data (entities). The problem with the Custom-operation is it requires SubmitChanges as well and at that stage you're not willing to persist the changes.
Does this make sense?
@Fredrik,
I agree, most of the time, its much better to "just get the data you need, do the changes, and then do the SubmitChanges".
However! I also agree with Theo: sometimes the "do the changes" part is significantly more work than some updates from the UI by the user, so that doing this work makes more sense on the server rather than on the client.
In my case it's rating an insurance policy. Specifically, quoting a policy without requiring the policy to be persisted to the DB.
The insurance policy itself is an object graph composed of a Policy object, one or more Coverage objects, zero or more Endorsement objects, plus a plethora of support objects (Person, Address, etc...)
"Rate" is a behavoir that logically makes more sense to do on the server, as it involves many & numerous DB lookups and in some cases some interesting calculations. Putting Rating on the server helps protects your IP (intellectual property), plus it avoids an extremely "chatty" conversation between the SL client and app server. (granted in either case there is still a lot of chatter between the DB & app server).
My first thought when creating my Rate method was to pass a complete policy graph in as a parameter, and return a complete policy graph as a return value (based on normal WCF protocols). RIA July 2009 baulked at this. Yes, a custom method helps; however, custom methods imply you are going to persist the object graph. In the application I recently finished, one of the requirements was quoting a policy without actually persisting the quote. We only persist issued policies.
I wound up defining a Rate method that accepts an XElement and returns an XElement, where I serialize the policy to/from XML for the rating engine.
Jay
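Jay's workaround (serializing the policy graph to XML so it fits an Invoke-friendly signature) can be illustrated generically. The sketch below is Python rather than the C#/XElement code actually used, and the Policy fields are invented for illustration; it only shows the serialize, transport, deserialize round trip:

```python
import xml.etree.ElementTree as ET

# Hypothetical, simplified stand-in for the Policy/Coverage graph described
# above. The real code is C# using XElement; this only illustrates the
# serialize -> transport -> deserialize round trip an Invoke-style method uses.
def policy_to_xml(policy):
    root = ET.Element("Policy", number=policy["number"])
    for cov in policy["coverages"]:
        ET.SubElement(root, "Coverage", code=cov["code"], limit=str(cov["limit"]))
    return ET.tostring(root, encoding="unicode")

def policy_from_xml(text):
    root = ET.fromstring(text)
    return {
        "number": root.get("number"),
        "coverages": [
            {"code": c.get("code"), "limit": int(c.get("limit"))}
            for c in root.findall("Coverage")
        ],
    }

policy = {"number": "P-1001",
          "coverages": [{"code": "FIRE", "limit": 50000}]}
wire = policy_to_xml(policy)   # what a Rate(XElement)-style method would receive
assert policy_from_xml(wire) == policy
```

The point of the round trip is that the service signature only needs one string-friendly parameter, at the cost of doing the graph (de)serialization yourself.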
I have always found your points interesting, Theo, and I think you have a really important use case. My feeling has always been that this is a feature that doesn't fit into the V1 release of WCF RIA Services. If I was designing this feature (and I am not), one possibility would be to wait for the WPF version of RIA Services to be done first. The reason is that the WPF version of RIA Services would (theoretically) give me a DomainContext and RIA Services type entities within the desktop CLR. So, that would make it possible for me to serialize the entire EntityContainer, ship it to the server, do whatever processing I want to do, and then send any modified entities back to the client, all without actually converting the entities back to the DAL types.
Is that a good design? I have no idea, I just made it up. I am sure Nikhil will come up with a much better concept when they get to it. The point is, they will need time to think about it, design different prototypes, try them out, and then come up with a sound technical design. It is our role in this process to come up with well-thought-out use cases that show:
- That a need exists
- That Microsoft is the only and/or best one to fulfill that need
- That the problem is "ripe"
It is 2 and 3 that are the most tricky. For example, I have seen many requests for Microsoft to add some kind of official MVVM support to Silverlight. That violates both two and three. Two is violated because by its very nature there is nothing about MVVM that requires it to be embedded in the framework. Three is violated because MVVM is still evolving and there is very little agreement on a standard way of writing them. Heck, there is no agreement that there should be a standard way of writing them. In many, many ways asking Microsoft to implement a feature is like asking the Supreme Court of the United States to decide a case. First the district courts (i.e. the programmers in the trenches), then the appeals courts (i.e. MVPs, forums, people creating CodePlex projects, Nikhil with Silverlight.FX, etc.) have their try, and then, finally, Microsoft will take a look at it.
OK, that was a little long winded of a post. The point is, keep pushing it Theo, people are listening, and please don't get discouraged because you don't see instant results.
I can see your point now. Your scenario can be solved with RIA Services today, or with WCF. As with several frameworks, there is maybe 5-20% of cases a framework can't solve; in those cases some other solution needs to be used.
Think of the DomainContext as the ObjectContext in Entity Framework or the DataContext in LINQ to SQL: it loads the data based on a query and returns it to us, and it will also cache the data, have a unit of work etc. We can add, make changes and then submit the changes. The same with WCF RIA Services, so for specific cases like the "Recalculate", you can just call a service operation (InvokeAttribute, not a custom operation), get the new results needed, and update the already-fetched sales order with the new data. Then the user can hit the save button to submit the changes.
I think most of the scenarios can be solved, but there may be some special cases that can't (we need to find one first ;)).
@Fredrik & @Colin
I think in most cases the question isn't "can the problem be solved?" but "how should the problem be solved?" I think Colin made a great point about whether or not something should be included in the framework. Just because my life would be made easier if the framework did something for me doesn't mean it would benefit many others.
My main concern here was that I wasn't using RIA properly. Turns out I had the wrong idea about what problem RIA Services was solving. My first thought was "cool... a fully configured WCF service for free!" Using the [ServiceOperation] let me just bypass all the RIA goodies and do my own thing whenever I wanted to. Loading the context and all that was just useful stuff that came along for the ride.
Turning that viewpoint on its head makes me do two things. First, it makes me want to structure my SL app to take advantage of the RIA Services data model, since it all "just works." Second, it forces me to open my thinking to other possibilities for solving problems outside of RIA.
RIA solves a very specific problem and it does so quite well. For other problems I can turn to other solutions.
The same with WCF RIA Services, so for specific cases like the "Recalculate", you can just call a service operation (InvokeAttribute, not a custom operation), get the new results needed
Great feedback guys.... and I agree we shouldn't ask for more than appropriate.
One of the problems I have with the current Invoke operation: when you pass in an Order entity I expect the associated OrderLines and change tracking information to be marshalled as well, just as transparently as when doing a SubmitChanges. Today neither the associated OrderLines nor the change tracking information will be marshalled. I have tested it. So when I pass in an Order entity, why does RIA decide to exclude the associated OrderLines? This is part of my contract; I never told the framework to exclude associations, so that doesn't make sense. And REST is REpresentational State Transfer after all, with the emphasis on "state". Basically I would like to see support for "SubmitChangesAndInvokeMyOperation". Today "SubmitChanges" is bound to the validate/save/custom operations and should be called "SubmitChangesValidateAndPersist". It shouldn't be too hard to map it to something else on the server, would it?
So WCF RIA has this great change tracking mechanism, transport protocol and flexible server side storage mapping, but you're left on your own as soon as you want to send client side state to the server for doing something else than persisting the data. I mean there is much more in life than saving data. Wouldn't it make sense having support for these additional operations as well? The whole idea of an AppServer is to encapsulate business logic so it can be used by different types of "thin" clients. You can replicate business logic on the client if you like, but the only truth lives on the server.
Imagine Enterprise JavaBeans where the "entities" don't leave the AppServer. All state is on the server and you have this chatty interface. In this context it makes perfect sense to poke methods on entity graphs. Lessons have been learned from that, so we changed the AppServer from stateful to stateless. This means all state will be on the client. So when you want to invoke business logic in this environment, you start by passing in the context (state). This can be a single parameter, but in forms-based scenarios it means passing in the entity context. WCF RIA has decided to exchange the full entity context and invoke a single operation instead of doing individual calls to create/insert/delete methods with simple value types from the client.
Basically I'm asking for a way to pass the entity context to the server and have a real "custom operation", which can be called without implicit persistence of the entities. I expect a proper entity graph on the server as well. I don't see why this doesn't fit in the current WCF RIA stack... the only difference from a custom operation is that you're not persisting data, you invoke some other operation. Why is the save so special? The actual persistence implementation is out of the WCF RIA scope anyway, because you can do anything you like during save when you override the domain service. I assume the "special" aspect of save is the implicit validation step. But even validation has to be used in the context of the operation: validation for deleting entities would trigger a different ruleset than for updating an entity.
We're one of these companies that can't exploit the Entity Framework and get all the persistence for "free", because we have an existing database schema and we have to support the "other" database provider as well. So life is already tougher for us ;-)
I hope I didn't hijack this thread and change the direction of the conversation. If so, I'm sorry. Why should you pass down a whole entity if you only want it to be marked "ready to deliver"? In this case, to increase performance and to not send too much data in an enterprise RIA, we simply send the Order ID and the status, nothing more; there's no reason to send the whole object. BUT! I like the support for loading objects; it will make it easy for us to gather the data we need in a nice structure.
Here is an old blog post:
Note: the blog post is based on a really early preview of RIA Services, and the new version has improved things a lot, so skip the RIA part; it's the story about distributing objects in general that may be interesting...
DCOM is the same as EJB: the entities live on the server and you have a chatty interface plus you're bound to a specific server machine. The solution to that was the introduction of the Data Transfer Object: free the server and move the state around. Consequence: mismatch between the rich internal domain model inside the AppServer and "behaviorless data objects" traveling to the client. So we build this nice abstraction on top of our database schema, we get a nice domain model, we tear it down and send data back to the client. Somehow we have to reconnect this client side world to the server side world again. RPC was not the answer, message based request/response is. So now we have entered the message based world... and the goal of WCF RIA Services is to make it easy: have a look at the terminology used by Brad Abrams when it was called .NET RIA Services.
Why should you pass down a whole entity if you only want it to be marked "ready to deliver".
Well, you're right, if all you wanted to do is to change an entity property, do it on the client. But an application is all about data transformations using rules. You fetch something from the database and once it has been loaded as an entity it becomes meaningful. So changing a flag from "1" to "0" is not the problem here, it's the rule that determines how this flag changes from "1" to "0". Now we're in a scary area, since why would you trust any changes done on the client at all? I mean I can fetch the order from the server, change some amounts, why would the server trust this changed data? But this is a side step...
In this case to increase performance and to not send to much data in a enterprise RIA, we simply send the Order ID and the status, nothing more, no reason to send the whole object.
You're absolutely right when the decision/rule works against the persisted state, which can be fetched from the store by the service. But what if you need to see the changed state that lives on the client to make the proper decision? Imagine you get a discount of 10 percent if you order 50 pieces and 20 percent for 100+ pieces. I have an order line in the database for 50 pieces, and on my client I added another order line for an additional 1000 pieces. I definitely need to pass the changed order line quantity if I want a correct discount percentage (1050 pieces). But how can the client be sure it passes enough context? Basically the service should expose a method that accepts a complete order to recalculate the price. Now it gets interesting: there are two variations of the "calculate price":
- one is basically a proposal during data entry
- the other one is an overruling piece of logic when persisting the order.
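To make the quantity-discount rule above concrete, here is a small Python sketch (thresholds taken from the example; the function name is hypothetical) showing why the persisted state alone gives the wrong answer once the client holds an unsaved order line:

```python
def discount_rate(total_pieces):
    # Tiered rule from the example: 20% at 100+ pieces, 10% at 50+ pieces.
    if total_pieces >= 100:
        return 0.20
    if total_pieces >= 50:
        return 0.10
    return 0.0

persisted_lines = [50]        # what the database knows about the order
client_lines = [50, 1000]     # includes the unsaved line added on the client

# Deciding from persisted state alone picks the wrong tier:
assert discount_rate(sum(persisted_lines)) == 0.10
# With the full client-side context (1050 pieces) the rule picks 20%:
assert discount_rate(sum(client_lines)) == 0.20
```

This is exactly the gap being discussed: the server can only apply the rule correctly if the client's unsubmitted changes travel with the call.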
Exactly. So if the goal of WCF RIA Services is to make distributed programming easier, let's do so. But we should always be aware of what we send to the middle tier, the same as we should be aware of what we pull from the database. Over the years I have used several technologies, and this discussion reminds me of the Progress 4GL environment with its AppServer and TempTable support. That 4GL environment used .NET DataSet-like parameters to talk to its AppServer. Generalizing WCF RIA Services feels like exchanging DataSets, compared to more strict contracts using plain vanilla WCF. Now I'm definitely off track ;-)
"But how can the client be sure it passes enough context?"
But is it really the client that should have the responsibility to know that? Just a thought..
Here is how I would have solved the OrderLine problem:
Load the order by using a Query operation
add the order line by using the .shared code feature. The .shared code will have an AddOrderLine method "added" to the Order entity. This method will or may have information to perform the discount based on the Order's state. If not, and we need that information from the server side before we do a SubmitChanges, we can use a service operation, pass only the information needed to do the discount, and return the new discount result.
Update the Order with the new discount information.
Do something more with the order if needed
SubmitChanges
Conflict handling, if there were any conflicts
Reload the Order if needed, but at this moment the Order on the client should have the same info as the order on the server.
"and the goal of WCF RIA Services is to make it easy"
Yes, but it's not a silver bullet; it can't and won't solve everything. It won't be suited for every app, and they don't say it's the silver bullet or that you have to use it to build RIAs. It's the developer's and architect's role to know when it can and can't be used, as with every framework and technology. The silver bullet will never exist, but different ways to solve problems will. If you have found a way to solve something much more easily, I know the RIA team is more than happy to hear your suggestions and feedback.
We are passing an entity as a parameter to the function RecordLinkClaim, loading data into the DB and getting back a string value. All this was done in a ServiceOperation, and it was working fine. But now we have upgraded RIA Services to WCF RIA Services, so we are changing [ServiceOperation] -> [Invoke], and we are getting the error given below.
Error 63 Operation named 'RecordLinkClaim' does not conform to the required signature. Parameter types must be an entity type or one of the predefined serializable types. Axis.Web.WPF.ReClaims.
Please help! Any help will be highly appreciated.
Similar problem here. I was using ServiceOperation to do simple operations like "void GenerateTemplate(Guid key)". I guess now I have to use ordinary web services for all those calls?
At the beginning I really liked RIA services, but now I slowly begin to dislike it...
Hmm, I read through this blog and still don't understand why my current implementation is not working.
Can anyone help?
Feb 21, 2017 11:02 AM | Lexi85
Hello All...
One strange thing: this code works perfectly fine on my localhost, but when I put it on the hosting server, I get this error:
Protected Sub Page_Load(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Load
    If Not IsPostBack Then
        Dim param As String = Request.QueryString("empid")
        Dim dt2 As New DataTable("UserTable")
        Dim constr1 As String = ConfigurationManager.ConnectionStrings("constr1").ConnectionString
        Dim con1 As New SqlConnection(constr1)
        Dim cmd1 As New SqlCommand()
        cmd1.CommandType = CommandType.StoredProcedure
        cmd1.CommandText = "PersonsDetail"
        cmd1.Parameters.AddWithValue("@UID", param)
        cmd1.Connection = con1
        con1.Open()
        Using sda As New SqlDataAdapter()
            cmd1.Connection = con1
            sda.SelectCommand = cmd1
            sda.Fill(dt2)
        End Using
        For Each row1 As DataRow In dt2.Rows
            Label1.Text = Convert.ToString(row1("firstName"))
        Next
    End If
End Sub
Server Error in '/' Application.
Compilation Error
Description: An error occurred during the compilation of a resource required to service this request. Please review the following specific error details and modify your source code appropriately.
Compiler Error Message: BC30451: 'CommandType' is not declared. It may be inaccessible due to its protection level.
Source Error:
Line 21: Dim cmd1 As New SqlCommand()
Line 22:
Line 23: cmd1.CommandType = CommandType.StoredProcedure
Line 24:
Feb 21, 2017 11:09 AM | raju dasa
Hi,
Lexi85: 'CommandType' is not declared.
Try importing the System.Data namespace at the top of your code:
Imports System.Data
Feb 22, 2017 08:45 AM | Cathy Zou
Hi Lexi85.
From your error message, I suspect it is related to a difference in the database user's permissions, or to a field that was changed after you moved your database to the hosting server.
So I suggest you check the following two points:
1. The database user's permissions on the hosting server
2. Compare your local database fields with the database fields on the hosting server
The same error was solved in the following links:
Best Regards
Cathy
Feb 22, 2017 09:46 AM | Khuram.Shahzad
There may be some issue with the field length of your database table column; if you have any decimal values, you may need to increase the precision.
4 replies
Last post Feb 22, 2017 09:46 AM by Khuram.Shahzad | https://forums.asp.net/t/2116100.aspx?can+someone+help+with+this+error+plz | CC-MAIN-2018-26 | refinedweb | 381 | 53.27 |
On Thu, 2011-12-01 at 10:50 -0800, Nathan Kinder wrote:
> On 12/01/2011 06:27 AM, Simo Sorce wrote:
> > On Thu, 2011-12-01 at 09:00 -0500, Jiri Kuncar wrote:
> >> I've added an attribute "idnsAllowSyncPTR" to "idnsZone" to enable or
> >> disable synchronization of PTR records. However the bind-dyndb-ldap
> >> plugin option "sync_ptr" has to be included in /etc/named.conf to run
> >> synchronization feature.
> > We need an update script to run on ipa server at upgrade time then.
> >
> >> My quick fix of LDAP schema in /usr/share/ipa/60basev2.ldif:
> > The DNS schema objects are in 60ipadns.ldif
> >
> >> -----
> >> attributeTypes: (2.16.840.1.113730.3.8.5.11 NAME 'idnsAllowSyncPTR'
> >>   DESC 'permit synchronization of PTR records' EQUALITY booleanMatch
> >>   SYNTAX 1.3.6.1.4.1.1466.115.121.1.7 SINGLE-VALUE X-ORIGIN 'IPA v2' )
> > NACK.
> > 5.11 is reserved by idnsAllowQuery and 5.12 by idnsAllowTransfer. The
> > first available OID is 5.13
> Do you have a page for tracking OID allocation within the FreeIPA
> namespace? If so, we should be sure to consult it to choose the next
> available OID and to update it once we have the final patch for this issue.

We have one place within Red Hat where we also keep track of all 389ds
OIDs; that's how I know there is a conflict here.

Simo.

--
Simo Sorce * Red Hat, Inc * New York

_______________________________________________
Freeipa-devel mailing list
Freeipa-devel@redhat.com
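The conflict Simo caught by hand is the kind of thing a small script can flag. Below is a hypothetical sketch (not part of FreeIPA's tooling) that scans LDIF schema text for attributeTypes whose OIDs are already taken:

```python
import re

# Parse attributeTypes OIDs out of LDIF schema text and report duplicates.
# Hypothetical helper written for illustration; FreeIPA's real OID registry
# is maintained elsewhere.
ATTR_RE = re.compile(r"attributeTypes:\s*\(\s*([\d.]+)\s+NAME\s+'(\w+)'")

def find_oid_conflicts(ldif_text):
    seen = {}       # oid -> first attribute name that claimed it
    conflicts = []  # (oid, first_name, duplicate_name)
    for oid, name in ATTR_RE.findall(ldif_text):
        if oid in seen and seen[oid] != name:
            conflicts.append((oid, seen[oid], name))
        else:
            seen.setdefault(oid, name)
    return conflicts

schema = """
attributeTypes: ( 2.16.840.1.113730.3.8.5.11 NAME 'idnsAllowQuery' )
attributeTypes: ( 2.16.840.1.113730.3.8.5.12 NAME 'idnsAllowTransfer' )
attributeTypes: ( 2.16.840.1.113730.3.8.5.11 NAME 'idnsAllowSyncPTR' )
"""

# Reports that 5.11 is already claimed by idnsAllowQuery.
print(find_oid_conflicts(schema))
```

Running it over the thread's example reproduces the NACK: 5.11 belongs to idnsAllowQuery, so the new attribute needs the next free OID.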
LSTM recurrent layer.
#include "all_layers.hpp"
Creates instance of LSTM layer
Returns index of input blob into the input array.
Each layer input and output can be labeled to easily identify them using "<layer_name>[.output_name]" notation. This method maps label of input blob to its index into input vector.
Reimplemented from cv::dnn::Layer.
Returns index of output blob in output array.
Reimplemented from cv::dnn::Layer.
Specifies shape of output blob which will be [[T], N] + outTailShape. If this parameter is empty or unset then outTailShape = [Wh.size(0)] will be used, where Wh is a parameter from setWeights().
If this flag is set to true then the layer will produce \( c_t \) as a second output.
Deprecated: use the flag produce_cell_output in LayerParams instead.
Shape of the second output is the same as the first output.
Specifies whether to interpret the first dimension of the input blob as the timestamp dimension or as the sample dimension.
Deprecated: use the flag use_timestamp_dim in LayerParams instead.
If the flag is set to true then the shape of the input blob will be interpreted as [T, N, [data dims]], where T specifies the number of timestamps and N is the number of independent streams. In this case each forward() call will iterate through T timestamps and update the layer's state T times.
If the flag is set to false then the shape of the input blob will be interpreted as [N, [data dims]]. In this case each forward() call will make one iteration and produce one timestamp with shape [N, [out dims]].
Set trained weights for LSTM layer.
LSTM behavior on each step is defined by current input, previous output, previous cell state and learned weights.
Let \(x_t\) be the current input, \(h_t\) the current output, and \(c_t\) the current state. Then the current output and current cell state are computed as follows:
\begin{eqnarray*} h_t &= o_t \odot tanh(c_t), \\ c_t &= f_t \odot c_{t-1} + i_t \odot g_t, \\ \end{eqnarray*}
where \(\odot\) is the per-element multiply operation and \(i_t, f_t, o_t, g_t\) are internal gates computed using learned weights.
Gates are computed as follows:
\begin{eqnarray*} i_t &= sigmoid&(W_{xi} x_t + W_{hi} h_{t-1} + b_i), \\ f_t &= sigmoid&(W_{xf} x_t + W_{hf} h_{t-1} + b_f), \\ o_t &= sigmoid&(W_{xo} x_t + W_{ho} h_{t-1} + b_o), \\ g_t &= tanh &(W_{xg} x_t + W_{hg} h_{t-1} + b_g), \\ \end{eqnarray*}
where \(W_{x?}\), \(W_{h?}\) and \(b_{?}\) are learned weights represented as matrices: \(W_{x?} \in R^{N_h \times N_x}\), \(W_{h?} \in R^{N_h \times N_h}\), \(b_? \in R^{N_h}\).
For simplicity and performance purposes we use \( W_x = [W_{xi}; W_{xf}; W_{xo}, W_{xg}] \) (i.e. \(W_x\) is vertical concatenation of \( W_{x?} \)), \( W_x \in R^{4N_h \times N_x} \). The same for \( W_h = [W_{hi}; W_{hf}; W_{ho}, W_{hg}], W_h \in R^{4N_h \times N_h} \) and for \( b = [b_i; b_f, b_o, b_g]\), \(b \in R^{4N_h} \). | https://docs.opencv.org/4.0.1/db/d3e/classcv_1_1dnn_1_1LSTMLayer.html | CC-MAIN-2019-43 | refinedweb | 464 | 66.13 |
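To make the gate layout concrete, here is a small NumPy re-implementation of one timestep (illustrative only, not OpenCV code). The rows of \( W_x \), \( W_h \) and \( b \) are stacked in the [i; f; o; g] order given by the concatenation above:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x_t, h_prev, c_prev, Wx, Wh, b):
    """One LSTM timestep with gates stacked as [i; f; o; g] (4*Nh rows)."""
    Nh = h_prev.shape[0]
    z = Wx @ x_t + Wh @ h_prev + b   # all four gate pre-activations, (4*Nh,)
    i = sigmoid(z[0 * Nh:1 * Nh])    # input gate
    f = sigmoid(z[1 * Nh:2 * Nh])    # forget gate
    o = sigmoid(z[2 * Nh:3 * Nh])    # output gate
    g = np.tanh(z[3 * Nh:4 * Nh])    # candidate cell state
    c_t = f * c_prev + i * g         # c_t = f ⊙ c_{t-1} + i ⊙ g
    h_t = o * np.tanh(c_t)           # h_t = o ⊙ tanh(c_t)
    return h_t, c_t

# Tiny smoke test with Nx = 3 inputs and Nh = 2 hidden units.
rng = np.random.default_rng(0)
Nx, Nh = 3, 2
Wx = rng.standard_normal((4 * Nh, Nx))
Wh = rng.standard_normal((4 * Nh, Nh))
b = np.zeros(4 * Nh)
h, c = lstm_step(rng.standard_normal(Nx), np.zeros(Nh), np.zeros(Nh), Wx, Wh, b)
print(h.shape, c.shape)
```

Note how the single matrix multiply produces all four gate pre-activations at once, which is exactly the performance motivation for the stacked \( W_x \), \( W_h \) and \( b \) described above.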
On Wed, 19 Aug 2015 23:25:17 +0200 Gwenole Beauchesne <gb.devel at gmail.com> wrote: > Hi, > > 2015-08-19 21:57 GMT+02:00 wm4 <nfxjfg at googlemail.com>: > > On Wed, 19 Aug 2015 19:32:27 +0200 > > Gwenole Beauchesne <gb.devel at gmail.com> wrote: > > > >> 2015-08-19 19:19 GMT+02:00 Gwenole Beauchesne <gb.devel at gmail.com>: > >> > Hi, > >> > > >> > 2015-08-19 18:50 GMT+02:00 wm4 <nfxjfg at googlemail.com>: > >> >> On Wed, 19 Aug 2015 18:01:36 +0200 > >> >> Gwenole Beauchesne <gb.devel at gmail.com> wrote: > >> >> > >> >>> Add av_vaapi_set_pipeline_params() interface to initialize the internal > >> >>> FFVAContext with a valid VA display, and optional parameters including > >> >>> a user-supplied VA context id. > >> >>> > >> >>> This is preparatory work for delegating VA pipeline (decoder) creation > >> >>> to libavcodec. Meanwhile, if this new interface is used, the user shall > >> >>> provide the VA context id and a valid AVCodecContext.get_buffer2() hook. > >> >>> Otherwise, the older vaapi_context approach is still supported. > >> >>> > >> >>> This is an API change. The whole vaapi_context structure is no longer > >> >>> needed, and thus deprecated. > >> >>> > >> >>> Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne at intel.com> > >> >>> --- > >> >>> doc/APIchanges | 6 ++-- > >> >>> libavcodec/vaapi.c | 87 ++++++++++++++++++++++++++++++++++++++++----- > >> >>> libavcodec/vaapi.h | 49 +++++++++++++++++++++++-- > >> >>> libavcodec/vaapi_internal.h | 4 +++ > >> >>> 4 files changed, 134 insertions(+), 12 deletions(-) > >> >>> > >> >>> diff --git a/doc/APIchanges b/doc/APIchanges > >> >>> index aa92b69..fcdaa26 100644 > >> >>> --- a/doc/APIchanges > >> >>> +++ b/doc/APIchanges > >> >>> @@ -16,8 +16,10 @@ libavutil: 2014-08-09 > >> >>> API changes, most recent first: > >> >>> > >> >>> 2015-xx-xx - lavc 56.58.100 - vaapi.h > >> >>> - Deprecate old VA-API context (vaapi_context) fields that were only > >> >>> - set and used by libavcodec. 
They are all managed internally now. > >> >>> + Add new API to configure the hwaccel/vaapi pipeline, currently > >> >>> + useful for decoders: av_vaapi_set_pipeline_params() > >> >>> + Deprecate old VA-API context (vaapi_context) structure, which is no > >> >>> + longer used for initializing a VA-API decode pipeline. > >> >>> > >> >>> 2015-xx-xx - lavu 54.31.100 - pixfmt.h > >> >>> Add a unique pixel format for VA-API (AV_PIX_FMT_VAAPI) that > >> >>> diff --git a/libavcodec/vaapi.c b/libavcodec/vaapi.c > >> >>> index c081bb5..afddbb3 100644 > >> >>> --- a/libavcodec/vaapi.c > >> >>> +++ b/libavcodec/vaapi.c > >> >>> @@ -41,24 +41,95 @@ static void destroy_buffers(VADisplay display, VABufferID *buffers, unsigned int > >> >>> } > >> >>> } > >> >>> > >> >>> +/** @name VA-API pipeline parameters (internal) */ > >> >>> +/**@{*/ > >> >>> +/** Pipeline configuration flags (AV_HWACCEL_FLAG_*|AV_VAAPI_PIPELINE_FLAG_*) */ > >> >>> +#define AV_VAAPI_PIPELINE_PARAM_FLAGS "flags" > >> >>> +/** User-supplied VA display handle */ > >> >>> +#define AV_VAAPI_PIPELINE_PARAM_DISPLAY "display" > >> >>> +/**@}*/ > >> >>> + > >> >>> +#define OFFSET(x) offsetof(FFVAContext, x) > >> >>> +static const AVOption FFVAContextOptions[] = { > >> >>> + { AV_VAAPI_PIPELINE_PARAM_FLAGS, "flags", OFFSET(flags), > >> >>> + AV_OPT_TYPE_INT, { .i64 = 0 }, 0, UINT32_MAX }, > >> >>> + { AV_VAAPI_PIPELINE_PARAM_DISPLAY, "VA display", OFFSET(user_display), > >> >>> + AV_OPT_TYPE_INT64, { .i64 = 0 }, 0, UINTPTR_MAX }, > >> >>> + { AV_VAAPI_PIPELINE_PARAM_CONTEXT, "VA context id", OFFSET(user_context_id), > >> >>> + AV_OPT_TYPE_INT, { .i64 = VA_INVALID_ID }, 0, UINT32_MAX }, > >> >>> + { NULL, } > >> >>> +}; > >> >>> +#undef OFFSET > >> >>> + > >> >>> +static const AVClass FFVAContextClass = { > >> >>> + . 
>> >>> +#include "avcodec.h" > >> >>> > >> >>> /** > >> >>> * @defgroup lavc_codec_hwaccel_vaapi VA API Decoding > >> >>> @@ -48,7 +50,11 @@ > >> >>> * during initialization or through each AVCodecContext.get_buffer() > >> >>> * function call. In any case, they must be valid prior to calling > >> >>> * decoding functions. > >> >>> + * > >> >>> + * This structure is deprecated. Please refer to pipeline parameters > >> >>> + * and associated accessors, e.g. av_vaapi_set_pipeline_params(). > >> >>> */ > >> >>> +#if FF_API_VAAPI_CONTEXT > >> >>> struct vaapi_context { > >> >>> /** > >> >>> * Window system dependent data > >> >>> @@ -56,6 +62,7 @@ struct vaapi_context { > >> >>> * - encoding: unused > >> >>> * - decoding: Set by user > >> >>> */ > >> >>> + attribute_deprecated > >> >>> void *display; > >> >>> > >> >>> /** > >> >>> @@ -64,6 +71,7 @@ struct vaapi_context { > >> >>> * - encoding: unused > >> >>> * - decoding: Set by user > >> >>> */ > >> >>> + attribute_deprecated > >> >>> uint32_t config_id; > >> >>> > >> >>> /** > >> >>> @@ -72,9 +80,9 @@ struct vaapi_context { > >> >>> * - encoding: unused > >> >>> * - decoding: Set by user > >> >>> */ > >> >>> + attribute_deprecated > >> >>> uint32_t context_id; > >> >>> > >> >>> -#if FF_API_VAAPI_CONTEXT > >> >>> /** > >> >>> * VAPictureParameterBuffer ID > >> >>> * > >> >>> @@ -181,8 +189,45 @@ struct vaapi_context { > >> >>> */ > >> >>> attribute_deprecated > >> >>> uint32_t slice_data_size; > >> >>> -#endif > >> >>> }; > >> >>> +#endif > >> >>> + > >> >>> +/** @name VA-API pipeline parameters */ > >> >>> +/**@{*/ > >> >>> +/** > >> >>> + * VA context id (uint32_t) [default: VA_INVALID_ID] > >> >>> + * > >> >>> + * This defines the VA context id to use for decoding. If set, then > >> >>> + * the user allocates and owns the handle, and shall supply VA surfaces > >> >>> + * through an appropriate hook to AVCodecContext.get_buffer2(). 
> >> >>> + */ > >> >>> +#define AV_VAAPI_PIPELINE_PARAM_CONTEXT "context" > >> >> > >> >> I don't understand this. Wouldn't it also be possible to use a > >> >> user-defined get_buffer2 function, but with a context created by > >> >> libavcodec? > >> > > >> > Historically, you cannot create a decode pipeline without supplying a > >> > fixed number of VA surfaces to vaCreateContext(). You cannot have any > >> > guarantee either that re-creating a VA context with an increased > >> > number of VA surfaces would still get things working. I have an > >> > example of a pretty old and semi-opensource driver that stuffs various > >> > local states into the VA context even if they are VA surface specific. > > > > I see... I guess we still want to support these. Does the vaapi API > > provide a way to know whether this is required? Sorry, I completely forgot to reply to this... > No, since it's specified to work that way. :) > > However, I can tell you that: AFAIK, all VA drivers will work as > described (vaCreateContext() + fixed number of VA surfaces required > for decode) except the open source Intel HD Graphics VA driver. > Though, iirc even with it I had to store extra data in the VA context > but only for H.264 MVC decoding on anything older than Haswell. OK. The vdpau wrapper driver doesn't need it either. > >> > If we wanted to call get_buffer2() multiple times to get an initial > >> > set of VA surfaces that you could use to create a VA context, (i) this > >> > would be ugly IMHO, and (ii) there is no guarantee either that the > >> > user get_buffer2() impl wouldn't hang/wait until his pool of free VA > >> > surfaces fills in again. > > > > I don't think it's ugly... seems like a nice (and optional) hack to > > support some older hardware without complicating the normal API too > > much. > > > >> For that very particular situation, I believe we could introduce an > >> AV_GET_BUFFER_FLAG_NOWAIT flag. 
But yet, we have no guarantee either > >> that the user would actually support this flag if needed. :) > > > > Sounds like a good solution. These surfaces are special as in they need > > to stick around until end of decoding too. > > > > Personally I don't see much value in the new API without having > > libavcodec handle decoder creation (including profile handling). > > > > This could be skewed by my personal experience; most of my own > > hwaccel-specific vaapi code is for decoder creation and profile > > handling. I'd like to do my own surface allocation handling, because > > vaapi doesn't seem to provide a way to retrieve certain surface > > parameters like chroma format and actual surface size. Other API > > users will have similar issues. But maybe I'm somehow misled. > > This is exactly what I would like to know, based on experience: why > would you want to go the pain with (i) determine & map surface > parameters to use, (ii) allocate & maintain your own pool of VA > surfaces, (iii) determine & map decoder profile to use if libavcodec > can do all of that for free for you? > > As I mentioned, I can concede to: > 1. Expose an av_vaapi_get_surface_params() function that gives you the > right surface chroma format and size params to use ; (Assuming this suggestion is for use with the proposed libavcodec vaapi surface allocator.) This would have to work on an AVFrame, because everything else would cause too much of a mess due to the disconnect between decoder reinitialization and return of decoded frames. (The same reason API users are encouraged to use AVFrame fields instead of AVCodecContext fields.) From what I know, there's no way to retrieve this information from an opaque vaapi surface, though. What do you think of adding an additional HW surface info struct to each AVFrame? For example, AVFrame.data[2] could point to an allocated struct, which has fields describing the HW-API specific properties of the surface in AVFrame.data[3]. > 2. 
Expose an av_vaapi_get_decoder_params() function that gives you the > right profile to use based on the actual underlying HW capabilities. > > With that, this can already reduce quite a bit of code (and possible > errors) in user code and (1) is just a matter of calling > vaCreateSurfaces(), and (2) is just a matter of calling > vaCreateContext(). Once you have that, this is not too far from > calling av_vaapi_set_pipeline_params(). There is still some code to > write for (ii) in user application if the user really wants such > cases. Sounds possible. > That's why, I really would like to know why you would want to do it > manually. This is technically possible, but I would like to understand > the real benefits over just having lavc to manage that kind of things > itself. I think that for the "advanced" user like you that really > really wants to allocate your own VA surfaces (but why? again?), you > can still use the old-school way, i.e. supply the VA context yourself > then, but with the new API (hwaccel_context-free). Well, it's not like I want to have all that extra code for managing surfaces in user application code (including my own). So I'd like very much to be convinced that it's unnecessary. I don't know the inner working of vaapi well enough to judge how important it is to be able to override surface creation completely. But I know the API was extended relatively recently to pass a list of surface attributes for vaCreateSurfaces(). There's an extendable list of "surface attributes" (VASurfaceAttribType), for which at least VASurfaceAttribMemoryType, VASurfaceAttribUsageHint, and VASurfaceAttribExternalBufferDescriptor matter for surface creation. (The latter sounds pretty scary - everything down to the pixel data pointer can be configured.) Can these things really served by the API? (Not like I plan to use them if I don't have to. 
But the API shouldn't be a "toy" API either, that is useful for the simplest cases only, and requires going all manual if you want a little bit more control.) I don't insist on any of this - but in my opinion this should be thought through. Being able to retrieve basic information like underlying hardware pixel format is essential, though. > Having said that, even if I do (1) and (2), they will live in a > separate/subsequent patch as I wanted to move towards this "lavc > managed HW resources" path gradually. i.e. I don't want to change the > comment as this exactly matches what lavc would do/user has to do at > this very precise point in the series. > > >> And, I prefer to restrict to clearly defined usages that work, rather > >> than allowing multiple combos that would complexify test coverage... I > >> believe the ones I exposed in the cover letter are the only practical > >> ones to ever be useful, based on a compiled set of feedbacks over past > >> years. But I can be wrong. If so, please tell me why would you > >> particularly want to have lavc allocate the VA context and you > >> providing your own VA surfaces. If creating a VA context is a problem > >> for the user, I certainly can expose a utility function that provides > >> the necessary decode params. Actually, it was in the plan too: > >> av_vaapi_get_decode_params(). > >> > >> > Because of that, I believe it's better to restrict the usages to what > >> > it is strictly permitted, for historical reasons. So, either user > >> > creates VA surfaces + VA context, or he lets lavc do so but for both > >> > VA surfaces & VA context as well. Besides, as I further distilled in > >> > one of the patches, lavc naturally works through allocating buffers > >> > itself for SW decoders, that can work, and this is something desired > >> > to achieve with hwaccel too. At least for vaapi. i.e. you'd only need > >> > to provide a VADisplay handle. 
> > > > I agree, but there are some minor details that perhaps need to be taken > > care of. (See what I wrote above for some remarks.) > > > >> >>> +/**@}*/ > >> >>> + > >> >>> +/** > >> >>> + * Defines VA processing pipeline parameters > >> >>> + * > >> >>> + * This function binds the supplied VA @a display to a codec context > >> >>> + * @a avctx. > >> >>> + * > >> >>> + * The user retains full ownership of the display, and thus shall > >> >>> + * ensure the VA-API subsystem was initialized with vaInitialize(), > >> >>> + * make due diligence to keep it live until it is no longer needed, > >> >>> + * and dispose the associated resources with vaTerminate() whenever > >> >>> + * appropriate. > >> >>> + * > >> >>> + * @note This function has no effect if it is called outside of an > >> >>> + * AVCodecContext.get_format() hook. > >> >>> + * > >> >>> + * @param[in] avctx the codec context being used for decoding the stream > >> >>> + * @param[in] display the VA display handle to use for decoding > >> >>> + * @param[in] flags zero or more OR'd AV_HWACCEL_FLAG_* or > >> >>> + * AV_VAAPI_PIPELINE_FLAG_* flags > >> >>> + * @param[in] params optional parameters to configure the pipeline > >> >>> + * @return 0 on success, an AVERROR code on failure. 
> >> >>> + */ > >> >>> +int av_vaapi_set_pipeline_params(AVCodecContext *avctx, VADisplay display, > >> >>> + uint32_t flags, AVDictionary **params); > >> >>> > >> >>> /* @} */ > >> >>> > >> >>> diff --git a/libavcodec/vaapi_internal.h b/libavcodec/vaapi_internal.h > >> >>> index 29f46ab..958246c 100644 > >> >>> --- a/libavcodec/vaapi_internal.h > >> >>> +++ b/libavcodec/vaapi_internal.h > >> >>> @@ -36,6 +36,10 @@ > >> >>> */ > >> >>> > >> >>> typedef struct { > >> >>> + const void *klass; > >> >>> + uint32_t flags; ///< Pipeline flags > >> >>> + uint64_t user_display; ///< User-supplied VA display > >> >>> + uint32_t user_context_id; ///< User-supplied VA context ID > >> >>> VADisplay display; ///< Windowing system dependent handle > >> >>> VAConfigID config_id; ///< Configuration ID > >> >>> VAContextID context_id; ///< Context ID (video decode pipeline) > >> >> > >> >> _______________________________________________ > >> >> ffmpeg-devel mailing list > >> >> ffmpeg-devel at ffmpeg.org > >> >> > >> > > >> > Regards, > >> > -- > >> > Gwenole Beauchesne > >> > Intel Corporation SAS / 2 rue de Paris, 92196 Meudon Cedex, France > >> > Registration Number (RCS): Nanterre B 302 456 199 > >> > >> > >> > > > > _______________________________________________ > > ffmpeg-devel mailing list > > ffmpeg-devel at ffmpeg.org > > > > > | http://ffmpeg.org/pipermail/ffmpeg-devel/2015-August/177594.html | CC-MAIN-2019-26 | refinedweb | 2,114 | 55.03 |
28053 - Re: [midatlanticretro] Nov. 18 list maintenance
Nov 10, 2012

On 11/10/2012 12:37 PM, Alexey Toptygin wrote:
>> "ygroupsblog.com". Sigh. I remember when Yahoo! used to be a
>> technical company. Now they don't even understand the concept of a
>> hierarchical namespace.
>
> But Dave, subdomains are for losers! Anyone that's anyone owns their own
> .com! Heck, every dev's VM at work has it's own .com! That we pay GoDaddy
> for! That resolves to an RFC 1918 IP! :-)

Pardon me while I vomit on the floor.

*HORRRRKKK!*
--
Dave McGuire, AK4HZ
New Kensington, PA
A Kalzium double bill today, with Kalzium gaining recognition in Osnabrück University's annual prize-giving, and this week's People Behind KDE interview. Find out everything you wanted to know about chemistry, the small print on toothpaste, and why not to visit Bavaria in the People Behind KDE interview with Carsten Niehaus, author of Kalzium. Read on for details of the prize.
KDE this week gained further recognition from the wider world. In an awards
ceremony at Osnabrück University in Germany, Carsten Niehaus won the
Intevation Prize for Achievements in Free Software for his work on Kalzium, KDE's interactive periodic table. The judges from Intevation praised the interactive features of Kalzium which help students by making facts easily discoverable. The prize was open to all past and present
members of the University.
Accepting a cheque for €750, Carsten said "It is an honour to be rewarded for my project Kalzium. I created it to have a good tool for my own use, but with a great community behind me I was able to develop something for others that I am proud of. A prize like this helps to keep up the motivation to improve Kalzium to be the tool of choice for teacher and student alike!" Kalzium, which takes its name from the German for the element calcium, supports many advanced features, including plotting data from all elements to show trends in the mass or atomic size for example. Its ease of use and range of features have won users in Osnabrück as well as further afield: Egon
Willighagen, lead developer of BlueObelisk and CDK, says, "Kalzium brings the
core chemistry in an easy-to-browse way to the desktop".
Kalzium is part of the KDE Edutainment project, which provides educational
software for all ages. "I am really pleased to see a KDE-Edu program getting another award. Kalzium's success is due to Carsten's constant efforts to improve his software and I am very proud to see him getting this award", Anne-Marie Mahfouf, a member of the KDE Edutainment team, said.
Good article and congratulations on the award, Carsten! I use Kalzium some for my chemistry class. The type-ahead feature sounds great... I'd use that just for convenience. :)
Corrections:
"What was you most brillant hack" should be "what was your most brilliant hack"
And also there are two sections that the response is formatted in the same big blue font that the question is in. Not that it matters that much :)
-Sam
and "12 January 2006" should be "12 February 2006" :)
the interview says that many programs @ his school still run windows. I just want to know what programs are this ? ist there still stuff missing ?
btw. for electronic classes there ist ktechlab ()
ch
All the big schoolbook vendors only produce software for Windows, sometimes for Mac. So each and every single school-book software is missing (there are many such programs, perhaps about 3 per schoolbook)
The situation with 3D viewers for proteins and other big molecules is now much better than two years ago, but those for Windows are often still better.
D-GISS. D-GISS is a software almost every (at least german) school has but which doesn't work in linux as it is somehow based on MS-Access. It is not used for teaching but more a database. Still, I would really like to be able to run D-GISS. It doesn't even install in wine.
There are many more, especially the situation with school-book software is a shame. And the vendors don't even answer email when you contact them about it.
Btw: About the drawing tools: Egon Willighagen pointed me to "his" java based drawing tool. It doesn't crash and is the best I know for Linux, but still ACDLabs is much better. Sorry that I have to say that :(
It looks nice and may be useful as a teaching tool in certain situations, but in many ways ktechlab is a toy. Besides, electronics is one of the areas where Linux support is good; several of the major vendors have Linux versions of their tools.
In other areas the situation is not that good, though increased adoption of Java has improved it somewhat. And you can even find helpful tools in the form of Java applets scattered around the net. Usually very specialized, but useful to help explain/understand different concepts. Like this collection:
Was wikipedia support ever added to kalzium? Or is it still planned/just a wish?
We need the Webapi of MediaWiki first, so it is a KDE4-task.
No, you dont need KDE4 - look how Amarok did it :-)
Amarok is only parsing the html the wikipedia spits out, removes the wikipedia-stuff and displays the content. What if the Wikipedia changes the html? Amarok would need to be patched. Also, the integration is much more than just displaying an article.
Wikipedia has special url to get the raw data for an article. I'd have to look it up, but it's something like ?raw
?action=raw for the raw wiki text e.g.
This is however not really useful, because you get the unformatted wiki text.
Why is that not really useful? I thought the pre-processed data is what was wanted to begin with? (That this needs to be processed first to embed images and links etc. is obvious, but this also offers added flexibility.)
We want far more, for example an API for:
"give me an article for the word "Enzyme". If that article is available in one of these languages return in one of these languages (in order of listing), if not return the english article [de,fr,it]"
That is not possible now. Or
"Give me articles related to the article "Enzyme""
I hope you can now see we need much more than "give me article 'x'".
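The language-fallback lookup described above ("return the article in the first of the listed languages that has one") can be sketched like this (hypothetical function and data; the real MediaWiki backend is not modeled here):

```python
def fetch_article(title, langs, available):
    """Return (lang, text) for the first language in `langs` that has
    an article for `title`, or None if no listed language has one.
    `available` is a stand-in for the wiki backend."""
    for lang in langs:
        key = (lang, title)
        if key in available:
            return lang, available[key]
    return None

# Illustrative data only
backend = {
    ("de", "Enzym"): "Enzyme sind Proteine ...",
    ("en", "Enzym"): "Enzymes are proteins ...",
}

# Prefer German, then French, then Italian
fetch_article("Enzym", ["de", "fr", "it"], backend)
# -> ('de', 'Enzyme sind Proteine ...')
```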
I tried compiling it...
got some errors:
(...........)
make[3]: Entering directory `/root/Downloads/ktechlab-0.3/src'
source='itemgroup.cpp' object='itemgroup.o' libtool=no \
depfile='.deps/itemgroup.Po' tmpdepfile='.deps/itemgroup.TPo' \
depmode=gcc3 /bin/sh ../admin/depcomp \
g++ -DHAVE_CONFIG_H -I. -I. -I.. -I../src -I../src/drawparts -I../src/electronics -I../src/electronics/components -I../src/electronics/simulation -I../src/flowparts -I../src/gui -I../src/languages -I../src/mechanics -I../src/micro -I/opt/kde/include -I/usr/lib/q -c -o itemgroup.o `test -f 'itemgroup.cpp' || echo './'`itemgroup.cpp
itemgroup.cpp: In member function `void ItemGroup::slotDistributeHorizontally()
':
itemgroup.cpp:239: error: ISO C++ forbids declaration of `multimap' with no
type
itemgroup.cpp:239: error: template-id `multimap' used as a
declarator
itemgroup.cpp:239: error: parse error before `;' token
itemgroup.cpp:241: error: `DIMap' undeclared (first use this function)
itemgroup.cpp:241: error: (Each undeclared identifier is reported only once for
each function it appears in.)
itemgroup.cpp:244: error: `ranked' undeclared (first use this function)
itemgroup.cpp:244: error: `make_pair' undeclared in namespace `std'
itemgroup.cpp:249: error: ISO C++ forbids declaration of `DIMap' with no type
itemgroup.cpp:249: error: uninitialized const `DIMap'
itemgroup.cpp:249: error: parse error before `::' token
itemgroup.cpp:250: error: parse error before `::' token
itemgroup.cpp:250: error: name lookup of `it' changed for new ISO `for' scoping
itemgroup.cpp:243: error: using obsolete binding at `it'
itemgroup.cpp:250: error: `rankedEnd' undeclared (first use this function)
itemgroup.cpp:252: error: base operand of `->' has non-pointer type `
QValueListIterator'
itemgroup.cpp:263: error: parse error before `::' token
itemgroup.cpp:265: error: base operand of `->' has non-pointer type `
QValueListIterator'
itemgroup.cpp:265: error: base operand of `->' has non-pointer type `
QValueListIterator'
itemgroup.cpp:265: error: base operand of `->' has non-pointer type `
QValueListIterator'
itemgroup.cpp:268: error: base operand of `->' has non-pointer type `
QValueListIterator'
itemgroup.cpp:249: warning: unused variable `const int DIMap'
itemgroup.cpp: In member function `void ItemGroup::slotDistributeVertically()':
itemgroup.cpp:285: error: ISO C++ forbids declaration of `multimap' with no
type
itemgroup.cpp:285: error: template-id `multimap' used as a
declarator
itemgroup.cpp:285: error: parse error before `;' token
itemgroup.cpp:290: error: `make_pair' undeclared in namespace `std'
itemgroup.cpp:295: error: ISO C++ forbids declaration of `DIMap' with no type
itemgroup.cpp:295: error: uninitialized const `DIMap'
itemgroup.cpp:295: error: parse error before `::' token
itemgroup.cpp:296: error: parse error before `::' token
itemgroup.cpp:296: error: name lookup of `it' changed for new ISO `for' scoping
itemgroup.cpp:289: error: using obsolete binding at `it'
itemgroup.cpp:298: error: base operand of `->' has non-pointer type `
QValueListIterator'
itemgroup.cpp:309: error: parse error before `::' token
itemgroup.cpp:311: error: base operand of `->' has non-pointer type `
QValueListIterator'
itemgroup.cpp:311: error: base operand of `->' has non-pointer type `
QValueListIterator'
itemgroup.cpp:311: error: base operand of `->' has non-pointer type `
QValueListIterator'
itemgroup.cpp:314: error: base operand of `->' has non-pointer type `
QValueListIterator'
itemgroup.cpp:295: warning: unused variable `const int DIMap'
make[3]: *** [itemgroup.o] Error 1
make[3]: Leaving directory `/root/Downloads/ktechlab-0.3/src'
make[2]: *** [all-recursive] Error 1
make[2]: Leaving directory `/root/Downloads/ktechlab-0.3/src'
make[1]: *** [all-recursive] Error 1
make[1]: Leaving directory `/root/Downloads/ktechlab-0.3'
make: *** [all] Error 2
bash-2.05b#
Any ideas?
I really want to use this for simulating circuits and things
If you want a quick&dirty solution, try to remove the -ansi parameters in the generated Makefile files.
Have a nice day!
I removed all instances of -ansi from all the generated Makefiles, but the errors didn't go away.
Upgraded to gcc 3.4 and the errors disappeared :-)
7 years of vim and still one keystroke too much... s/:wq/:x/ ;-)
Yes, that is the one thing I cannot remember in vim :) I will always use :wq, sorry ;-)
:wq
In the interview there are some HTML tags missing.
Look at "Which text editor do you use? Why?" for example.
The whole part is written as a headline.
Calle
Congratulations to Carsten and all the other people that contributed to Kalzium, and to the other kde-edu developers! You guys are doing a great job; it's very cool to show off your apps, and I'm sure they are being used, and will be used even more :D
And perhaps when KDE 4 arrives you can create a "Pro" version and earn some compensation?
This company does a nice one for OS X.
I know that table from the screenshots, looks pretty nice, yes.
If somebody wants to donate money: I have a bank account and an Amazon wishlist (German Amazon). It would be really nice to receive a DVD or two, of course :-) But Kalzium will always be free as in beer and as in freedom.
Just a big "Thank you!" to Carsten and all the kdeedu developers for the amazing applications they are creating. My wife and I educate our children at home, and the KDE programs are proving extremely useful. Please keep up the good work!
What does he mean, there is no Linux program for scientific drawing of compounds? That's just silly; what have all the chemical physicists used all those years, when the rest of the scientific world has been writing in TeX/LaTeX? Probably something like XyMTeX, that's what.
Jonas, of course you can draw them. I studied chemistry myself without touching non-free software. I used xfig, inkscape, latex, xdrawchem and so on. But those tools are absolutely not usable for a regular chemistry teacher. If you want Linux in school, you need software for teachers and students alike.
Furthermore, ACDLabs is *much* better than any !windows solution out there. It is easy to use, fast, high quality, supports all kinds of calculations, has a very good 3D mode, names molecules for you, and so on. *That* is what we need, not a ChemTeX solution where you need to read 10 howtos to draw acetylsalicylic acid!
PySense hardware versions 1.0 vs 1.1 with Lopy - battery lifetime issue
Hi, I'm working on a prototype with Pysense + Lopy + 3xAA batteries as power source.
When I'm using a Lopy (Firmware 1.17.5.b6) on my Pysense V1.1 (Firmware 0.0.8), the power consumption looks ok and the battery slowly drains.
If I take the exact same Lopy with no software changes and put it on a Pysense V1.0, the battery drains much faster.
I have no idea how this is possible, since there should be no difference between V1 and V1.1!?
I just want to send temperature + humidity to my gateway and go back to deepsleep.
Here is my code:
(There are some useless parts - temp1, temp2, temp3, etc. - which I am sending over LoRa because, if everything is working correctly, I want to send more data. But first I have to find my battery issue.)
from pysense import Pysense
from SI7006A20 import SI7006A20
from si7021 import SI7021
from network import LoRa
import binascii
import socket
import struct
import machine
import time
import pycom
import utime
from machine import I2C, Pin
from machine import WDT
import uos

wdt = WDT(timeout=60000)  # 60 sec - watchdog
pycom.heartbeat(False)    # turn off heartbeat

_VERSION_SENSOR = 1.14

# A basic package header
# B: 1 byte for the deviceId
# B: 1 byte for the pkg size
# B: 1 byte for the messageId
# %ds: Formated string for string
_LORA_PKG_FORMAT = "!BBB%ds"
_LORA_PKG_ACK_FORMAT = "BBBB"
_LORA_PKG_LONG_ACK_FORMAT = "BBBBB"

# This device ID, use different device id for each device
_DEVICE_ID = 0x04

wdt.feed()
WIFI_MAC = binascii.hexlify(machine.unique_id()).upper()

py = Pysense()
if (py.read_fw_version() < 8):
    while(True):
        pycom.rgbled(0x001111)
        time.sleep(3)

print('read_fw_version: ' + str(py.read_fw_version()))
print('uos.uname(): ' + str(uos.uname()))

si = SI7006A20(py)
temp2 = si.temperature()
hum = si.humidity()
temp3 = 99.99
voltage = py.read_battery_voltage()
temp1 = temp2
pressure = 90000
lowest = 0
highest = 0

myMessage = WIFI_MAC + ';%.2f;%.2f;%.2f;%i;%i;%.0f;%.2f;%.2f' % (temp1, temp2, hum, lowest, highest, pressure, temp3, voltage)

msg_id = 1
lora = LoRa(
    mode=LoRa.LORA,
    region=LoRa.EU868,
    frequency=868100000,
    power_mode=LoRa.TX_ONLY,
    tx_power=7,
    bandwidth=LoRa.BW_125KHZ,
    sf=9,
    preamble=8,
    coding_rate=LoRa.CODING_4_8,  # CODING_4_8
    tx_iq=False,
    rx_iq=False
)
lora_sock = socket.socket(socket.AF_LORA, socket.SOCK_RAW)
lora_sock.setblocking(True)

pkg = struct.pack(_LORA_PKG_FORMAT % len(myMessage), _DEVICE_ID, len(myMessage), msg_id, myMessage)
lora_sock.send(pkg)

sleep = 60*15
if (voltage > 1):
    if (voltage < 3.33):
        sleep = 60*60*24
    elif (voltage < 3.4):
        sleep = 60*60
    elif (voltage < 3.5):
        sleep = 60*45
    elif (voltage < 3.55):
        sleep = 60*30

py.setup_sleep(sleep)
py.go_to_sleep()
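As an aside, the voltage-to-sleep ladder at the end of this script can be pulled out into a pure function (same thresholds; just a refactoring sketch), which makes the duty-cycle logic easy to sanity-check off-device:

```python
def sleep_seconds(voltage):
    """Map measured battery voltage (V) to a deep-sleep duration in seconds.
    Thresholds copied from the script above; 60*15 is the default."""
    sleep = 60 * 15
    if voltage > 1:  # guard from the original script (presumably filters unusable readings)
        if voltage < 3.33:
            sleep = 60 * 60 * 24  # nearly empty: back off to once a day
        elif voltage < 3.4:
            sleep = 60 * 60
        elif voltage < 3.5:
            sleep = 60 * 45
        elif voltage < 3.55:
            sleep = 60 * 30
    return sleep

# Spot-check the ladder
assert sleep_seconds(3.7) == 15 * 60
assert sleep_seconds(3.45) == 45 * 60
assert sleep_seconds(3.2) == 24 * 60 * 60
```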
Does anybody know, why there could be a difference between V1.0 and V1.1 of the Pysense?
What could I do to extend the battery lifetime? If I use SF7 or SF8, my packets are not received reliably.
Any help is appreciated :)
Thanks in advance | https://forum.pycom.io/topic/3494/pysense-hardware-versions-1-0-vs-1-1-with-lopy-battery-lifetime-issue | CC-MAIN-2022-05 | refinedweb | 476 | 53.88 |
If you are an average internet user, then you interact with daemons every day. This article will describe what daemons do, how to create them in Python, and what you can use them for.
Daemon Defined.
dink:~ jmjones$ $(sleep 10; echo echo "WAKING UP";) &
[1] 314
dink:~ jmjones$ WAKING UP
This backgrounded the sleep and echo commands. Ten seconds later, after sleep completed, the command echoed "WAKING UP" and appeared on my terminal. But just running a process in the background doesn't qualify it for daemon status. There are some deeper technical qualifications that an aspiring process has to meet in order to be branded with the daemon label.
Forking a Daemon Process
Following is the recipe "Forking a Daemon Process on Unix" from The Python Cookbook that will allow your Python code to daemonize itself.
import sys, os

def daemonize(stdin='/dev/null', stdout='/dev/null', stderr='/dev/null'):
    # Perform first fork.
    try:
        pid = os.fork()
        if pid > 0:
            sys.exit(0)  # Exit first parent.
    except OSError, e:
        sys.stderr.write("fork #1 failed: (%d) %s\n" % (e.errno, e.strerror))
        sys.exit(1)

    # Decouple from parent environment.
    os.chdir("/")
    os.umask(0)
    os.setsid()

    # Perform second fork.
    try:
        pid = os.fork()
        if pid > 0:
            sys.exit(0)  # Exit second parent.
    except OSError, e:
        sys.stderr.write("fork #2 failed: (%d) %s\n" % (e.errno, e.strerror))
        sys.exit(1)

    # The process is now daemonized, redirect standard file descriptors.
    for f in sys.stdout, sys.stderr:
        f.flush()
    si = file(stdin, 'r')
    so = file(stdout, 'a+')
    se = file(stderr, 'a+', 0)
    os.dup2(si.fileno(), sys.stdin.fileno())
    os.dup2(so.fileno(), sys.stdout.fileno())
    os.dup2(se.fileno(), sys.stderr.fileno())
One way that a daemon process differs from a normal backgrounded task is that a daemon process disassociates from its calling process and controlling terminal. This recipe outlines the standard procedure for creating a daemon process. This procedure includes forking once, calling setsid to become a session leader, then forking a second time.
Along the way, it is common to also change directory to / to ensure that the resulting working directory will always exist. It also ensures that the daemon process doesn't tie up the ability of the system to unmount the filesystem that it happens to be in. It is also typical to set its umask to 0 so that its file creation is set to the most permissive.
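To make the umask point concrete, here is a small stand-alone probe (Python 3 syntax, unlike the Python 2 recipe above; not part of the recipe itself): with the umask set to 0, a file created with mode 0666 keeps all of those bits.

```python
import os
import stat
import tempfile

# With umask 0, no permission bits are masked off at file creation.
old_umask = os.umask(0)
tmpdir = tempfile.mkdtemp()
path = os.path.join(tmpdir, "probe")

fd = os.open(path, os.O_CREAT | os.O_WRONLY, 0o666)
os.close(fd)
mode = stat.S_IMODE(os.stat(path).st_mode)

os.remove(path)
os.rmdir(tmpdir)
os.umask(old_umask)  # restore the caller's umask

print(oct(mode))     # 0o666: nothing was masked off
```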
After becoming a daemon, this Python example also sets its standard input (stdin), standard output (stdout), and standard error (stderr) to the values the caller specified.
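The dup2 redirection at the end of the recipe can also be demonstrated on its own. This is a self-contained Python 3 sketch of the idea, using a temporary file as a stand-in for the daemon's log: once os.dup2 repoints file descriptor 1, anything written to stdout lands in the file.

```python
import os, tempfile

# Sketch of the recipe's final step: os.dup2 swaps a standard file
# descriptor, so writes to fd 1 (stdout) go to a file instead.
log_path = tempfile.mkstemp()[1]        # stand-in for the daemon's log
log_fd = os.open(log_path, os.O_WRONLY | os.O_APPEND)
saved_fd = os.dup(1)                    # remember the real stdout

os.dup2(log_fd, 1)                      # stdout now points at the log
os.write(1, b"daemon says hello\n")
os.dup2(saved_fd, 1)                    # restore the terminal

os.close(log_fd)
os.close(saved_fd)
captured = open(log_path).read()
print(captured)
```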
A Good Candidate for Daemonization?
Source: https://www.datamation.com/netsys/article.php/3786866/Creating-a-Daemon-with-Python.htm
Dave Thomas, one of the Pragmatic Programmers, has developed a series of exercises on his site codekata.com. The idea behind it is based on the concept that to get good at something you should "practice, practice, practice". Malcolm Gladwell calls this the 10,000 Hour Rule in his book Outliers.
While browsing the site, I came across Kata Six: Anagrams. Below is my attempt in Factor.
If you want to avoid spoilers and attempt this yourself, you might not want to read the rest of this post.
First, some preliminary imports and a namespace:
USING: arrays ascii assocs fry io.encodings.ascii io.files
kernel math memoize sequences sorting strings ;
IN: anagrams
One way to check if two words are anagrams is to sort their letters and compare. For example, "listen" and "silent" are anagrams of each other (i.e., when sorted, their letters are both "eilnst").
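As a quick illustration of the idea (in Python rather than Factor), the anagram test is a one-liner: two words are anagrams exactly when their sorted letters match.

```python
# The anagram check in miniature: sort the letters and compare.
key = "".join(sorted("listen"))
print(key)                                   # eilnst
print(sorted("listen") == sorted("silent"))  # True
```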
We will use this approach to take a list of words and create a mapping of their sorted letters to a list of words that are anagrams of each other. After we do that, we'll filter the map to only have words that have anagrams (where two or more words share the same mapping).
: (all-anagrams) ( seq assoc -- )
    '[ dup natural-sort >string _ push-at ] each ;

: all-anagrams ( seq -- assoc )
    H{ } clone [ (all-anagrams) ] keep
    [ nip length 1 > ] assoc-filter ;
You can see it in action:
( scratchpad ) { "listen" "silent" "orange" } all-anagrams .
H{ { "eilnst" V{ "listen" "silent" } } }
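For comparison, here is a rough Python equivalent of all-anagrams. The name mirrors the Factor word, but this is my own sketch, not part of the original kata.

```python
from collections import defaultdict

def all_anagrams(words):
    """Group words by their sorted letters, keeping only groups of 2+."""
    groups = defaultdict(list)
    for word in words:
        groups["".join(sorted(word))].append(word)
    return {k: v for k, v in groups.items() if len(v) > 1}

print(all_anagrams(["listen", "silent", "orange"]))
# {'eilnst': ['listen', 'silent']}
```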
Now that we have that, we need a word list. The link on the original blog post no longer works, but most systems come with a dictionary, so we will use that (making all the words lowercase so that we can compare in a case-insensitive way).
MEMO: dict-words ( -- seq )
    "/usr/share/dict/words" ascii file-lines [ >lower ] map ;
Given a list of dictionary words, we can calculate all anagrams:
MEMO: dict-anagrams ( -- assoc )
    dict-words all-anagrams ;
On my MacBook Pro, I see 234,936 words and 15,048 groups of anagrams. Using these, we can write a word to look for anagrams by checking the dictionary.
: anagrams ( str -- seq/f )
    >lower natural-sort >string dict-anagrams at ;
I chose to return f if no anagrams are found. You can try it out:
( scratchpad ) "listen" anagrams .
V{ "enlist" "listen" "silent" "tinsel" }
( scratchpad ) "banana" anagrams .
f
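The lookup itself is just a dictionary access keyed on the sorted letters. A self-contained Python sketch (the hard-coded groups table stands in for the memoized dict-anagrams):

```python
def anagrams(word, groups):
    """Look up a word's anagram group by its sorted letters, or None."""
    return groups.get("".join(sorted(word.lower())))

# Toy stand-in for the full dictionary mapping.
groups = {"eilnst": ["enlist", "listen", "silent", "tinsel"]}

print(anagrams("LISTEN", groups))  # ['enlist', 'listen', 'silent', 'tinsel']
print(anagrams("banana", groups))  # None
```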
The blog goes further and asks a couple questions:
- What sets of anagrams contain the most words?
- What are the longest words that are anagrams?
Both of these share a common step, which is to take a sequence and filter it for the elements of greatest length:
: longest ( seq -- subseq )
    dup 0 [ length max ] reduce '[ length _ = ] filter ;
This works pretty simply:
( scratchpad ) { "a" "ab" "abc" "abcd" "hjkl" } longest .
{ "abcd" "hjkl" }
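The same filter reads naturally in Python too (my own sketch): find the maximum length, then keep every element that reaches it.

```python
def longest(seq):
    """Keep only the elements whose length equals the maximum length."""
    m = max(map(len, seq), default=0)
    return [x for x in seq if len(x) == m]

print(longest(["a", "ab", "abc", "abcd", "hjkl"]))  # ['abcd', 'hjkl']
```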
Now we can write the words to answer those two questions:
: most-anagrams ( -- seq )
    dict-anagrams values longest ;

: longest-anagrams ( -- seq )
    dict-anagrams [ keys longest ] keep '[ _ at ] map ;
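Python sketches of those last two words follow the same shape, run here against a toy groups table rather than the real dictionary (again, my own illustration):

```python
def longest(seq):
    """Keep only the elements whose length equals the maximum length."""
    m = max(map(len, seq), default=0)
    return [x for x in seq if len(x) == m]

def most_anagrams(groups):
    """The anagram groups containing the most words."""
    return longest(list(groups.values()))

def longest_anagrams(groups):
    """The anagram groups whose sorted-letter keys are the longest."""
    return [groups[k] for k in longest(list(groups.keys()))]

groups = {
    "eilnst": ["enlist", "listen", "silent"],
    "aegnor": ["onager", "orange"],
}
print(most_anagrams(groups))     # [['enlist', 'listen', 'silent']]
print(longest_anagrams(groups))  # both groups: their keys are equally long
```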
The answer is? The set of anagrams containing "groan" is the largest (10 words). And two pairs are tied for longest: "pneumohydropericardium/hydropneumopericardium" and "cholecystoduodenostomy/duodenocholecystostomy". Wouldn't you know, they'd be medical words...
The code for this is available on my GitHub.
1 comment:
elegant simple code.
good solution! well done

Source: http://re-factor.blogspot.com/2010/08/anagrams.html