text
stringlengths
454
608k
url
stringlengths
17
896
dump
stringclasses
91 values
source
stringclasses
1 value
word_count
int64
101
114k
flesch_reading_ease
float64
50
104
Also, the “fake covariance” trick. Sometimes, Scala programmers notice a nice optimization they can use in the case of a class that has an invariant type parameter, but in which that type parameter appears in variant or phantom position in the actual data involved. =:= is an example of the phantom case. sealed abstract class =:=[From, To] extends (From => To) with Serializable scala.collection.immutable.Set is an example of the covariant case. Here is the optimization, which is very similar to the Liskov-lifting previously discussed: a “safe” cast of the invariant type parameter can be made, because all operations on the casted result remain sound. Here it is for Set, an example of the “fake covariance” trick: override def toSet[B >: A]: Set[B] = this.asInstanceOf[Set[B]] And here it is for =:=, an example of the “singleton instance” trick. private[this] final val singleton_=:= = new =:=[Any,Any] { def apply(x: Any): Any = x } object =:= { implicit def tpEquals[A]: A =:= A = singleton_=:=.asInstanceOf[A =:= A] } Unless you are using the Scalazzi safe Scala subset, which forbids referentially nontransparent and nonparametric operations, these tricks are unsafe. Many people are confused that they cannot write functions like this: def addone[A](x: A): A = x match { case s: String => s + "one" case i: Int => i + 1 } Being given an error as follows. <console>:8: error: type mismatch; found : String required: A case s: String => s + "one" ^ <console>:9: error: type mismatch; found : Int required: A case i: Int => i + 1 ^ Let’s consider only one case, the first. In the right-hand side (RHS) of this case, you have not proved that A is String at all! You have only proved that, in addition to definitely having type A, x also definitely has type String. In type relationship language, x.type <: A x.type <: String All elephants are grey and are also animals, but it does not follow that all grey things are animals or vice versa. If you use a cast to “fix” this, you have produced type-incorrect code, period. Under special circumstances, however, information about a type parameter can be recovered, safe and sound. Take this: abstract class Box[A] case class SBox(x: String) extends Box[String] case class IBox(x: Int) extends Box[Int] def addone2[A](b: Box[A]): A = b match { case SBox(s) => s + "one" case IBox(x) => x + 1 } This compiles, and I don’t even have to have data in the box to get at the type information that A ~ String or A ~ Int. Consider the first case. On the RHS, I have b.type <: SBox <: Box[String] b.type <: Box[A] In addition, A is invariant, so after going up to Box[String], b couldn’t have widened that type parameter, or changed it in any way, without an unsafe cast. Additionally, our supertype tree cannot contain Box twice with different parameters. So we have proved that A is String, because we proved that Box[A] is Box[String]. This is very useful when defining GADTs. Let’s consider a similar ADT with the type parameter marked variant. abstract class CovBox[+A] case class CovSBox(x: String) extends CovBox[String] def addone3[A](b: CovBox[A]): A = b match { case CovSBox(s) => s + "one" } This works too, because in the RHS of the case, we proved that: b.type <: CovSBox <: CovBox[String] <: CovBox[A] String <: A The only transform in type A could have possibly undergone is a widening, which must have begun at String. A similar example can be derived for contravariance. In our first example, there is one type that we know must be a subtype of A, no matter what! def addone[A <: AnyRef](x: A): A = x: x.type (Scala doesn’t like it when we talk about singleton types without an AnyRef upper bound at least. But the underlying principle holds for all value types.) Where x is an A of stable type, x.type <: A for all possible A types. You might say, “that’s uninteresting; obviously x is an A in this code.” But that isn’t what we’re talking about; our premise is that any value of type x.type is also an A! So if we could prove that something else had the singleton type x.type, we would also prove that it shared all of x’s types! We can do that with a singleton type pattern, which is implemented (soundly in 2.11) with a reference comparison. Scala lets us use some of the resulting implications. final case class InvBox[A](b: A) def maybeeq[A, B](x: InvBox[A], y: InvBox[B]): A = y match { case _: x.type => y.b } To which you might protest, “there’s only one value of any singleton type!” Well, yes. And here’s where our seemingly innocent optimization turns nasty. If you’ll recall, it depends upon treating a value with multiple types via an unsafe cast. def unsafeCoerce[A, B]: A => B = { val a = implicitly[A =:= A] implicitly[B =:= B] match { case _: a.type => implicitly[A =:= B] } } def unsafeCoerce2[A, B]: A => B = { val n = Set[Nothing]() val b = n.toSet[B] n.toSet[A] match { case _: b.type => implicitly[A =:= B] } } Both of these compile to what is in essence an identity function. scala> Some(unsafeCoerce[String, Int]("hi")) res0: Some[Int] = Some(hi) scala> Some(unsafeCoerce2[String, Int]("hi")) res1: Some[Int] = Some(hi) In our invariant Box example we decided that, as it was impossible to change the type parameter without an unsafe cast, we could use that knowledge in the consequent types. In unsafeCoerce, where ? represents the value before the match keyword: ?.type <: a.type <: (A =:= A) ?.type <: (B =:= B) A ~ B In unsafeCoerce2, ?.type <: b.type <: Set[B] ?.type <: Set[A] A ~ B There is nothing wrong with Scala making this logical inference. The “optimization” of that cast is not safe. Let me reiterate: Scala’s type inference surrounding pattern matching should not be “fixed” to make unsafe casts “safer” and steal our GADTs. Unsafe code is unsafe. For types like these, it is not possible to exploit this unsafety without a reference check, which is what a singleton type pattern compiles to. As the Scalazzi safe subset forbids referentially nontransparent operations, if you follow its rules, these optimizations become safe again. This is just yet another of countless ways in which following the Scalazzi rules makes your code safer and easier to reason about. That isn’t to say it’s impossible to derive a situation where the optimization exposes an unsafeCoerce in Scalazzi code. However, you must specially craft a type in order to do so. abstract class Oops[A] { def widen[B>:A]: Oops[B] = this.asInstanceOf[Oops[B]] } case class Bot() extends Oops[Nothing] def unsafeCoerce3[A, B]: A => B = { val x = Bot() x.widen[A] match { case Bot() => implicitly[A <:< B] } } The implication being ?.type <: Bot <: Oops[Nothing] ?.type <: Oops[A] Nothing ~ A Scalaz uses the optimization under consideration in scalaz.IList. So would generalized Functor-based Liskov-lifting, as discussed at the end of “When can Liskov be lifted?”, were it to be implemented. However, these cases do not fit the bill for exploitation from Scalazzi-safe code. On the other hand, the singleton type pattern approach may be used in all cases where the optimization may be invoked by a caller, including standard library code where some well-meaning contributor might add such a harmless-seeming avoidance of memory allocation without your knowledge. Purity pays, and often in very nonobvious ways. This article was tested with Scala 2.11.1. Unless otherwise noted, all content is licensed under a Creative Commons Attribution 3.0 Unported License.Back to blog
https://typelevel.org/blog/2014/07/06/singleton_instance_trick_unsafe.html
CC-MAIN-2019-13
refinedweb
1,294
65.32
Host Platform: Windows Version: vtune_amplifier_2019.4.0.597835 Analysis: HPC Performance Characterisation [Remote Linux (SSH)] Target Platform: Ubuntu 16.04 Kernel: 4.4.0-135-generic Problem: We have a program that uses mmap to read large dataset files (tens-of-GB). However, it seems that VTunes recognize everything mmapped as shared object files. It tries to copy from the target into the host via pscp everything the program has mmapped. This is undesirable because it is very slow to transfer tens-of-GB via network and will drain the host system storage. It is useless because VTunes will eventually state a warning that this file "... has an unsupported or invalid file format" because it is just a data file, not shared objects. To repeat, run the following program: #include <assert.h> #include <sys/mman.h> #include <sys/stat.h> #include <fcntl.h> #include "boost/filesystem.hpp" int main(int argc, char* argv[]) { const char* path = "/mnt/nvme0n1/very_large_about_1GB_dataset_file"; size_t file_bytes = boost::filesystem::file_size(path); int fd = open(path, O_RDONLY); assert(fd >= 0); void *data = mmap(NULL, file_bytes, PROT_READ, MAP_PRIVATE, fd, 0); assert(data != MAP_FAILED); printf("Content %.*s\n", 8, static_cast<char *>(data)); std::vector<char> vec; for (size_t i=0; i<file_bytes; i++) { vec.push_back(static_cast<char *>(data)); } int err = munmap(data, file_bytes); assert(err == 0); err = close(fd); assert(err == 0); return 0; } Link Copied Hi. Let's us check that. Kirill Ok, we've reproduced that on our side. Have some ideas how that can be fixed. Thanks for your reproducer.
https://community.intel.com/t5/Analyzers/VTunes-remote-analysis-with-programs-using-mmap-extremely-slow/td-p/1173130
CC-MAIN-2021-39
refinedweb
254
61.43
Your browser does not seem to support JavaScript. As a result, your viewing experience will be diminished, and you have been placed in read-only mode. Please download a browser that supports JavaScript, or enable it if it's disabled (i.e. NoScript). Hello I'm working with pyside2. How to invoke clean-up methods in c4d enviroment? For example, i made app and if close it or check visibilty,then del app from PySide2.QtWidgets import QApplication app = QApplication.instance() // steps with dialogs if win.isVisible() == False: del app clean objects, any garbage collection in memory Hi @iluxa7k thanks for reaching out us. With regard to your question, as already pointed out by @zipit , the Python del statement just is only responsible for decrementing the reference counter for the instance being "deleted" and not to actually free the memory used by the instance itself. This is clearly noted in the official Python documentation on the Note coming along with object.__del__(self) in Python 3 or object.__del__(self) in Python 2 . Given that, also consider that the del app statement you've described in your first post might not actually deliver the "clean-up" action you're thinking about. del object.__del__(self) del app Best, Riccardo Hi, to my knowledge, there is nothing special going on in this regard with Cinema's Python interpreter. Also note that invoking del does technically not enforce an object to be garbage collected even in the most of vanilla Python interpreters. It simply removes a reference to an object from the current scope. If the reference count for that object is then zero on the next collection cycle, it might be collected. Even if you have just one reference to an object and you remove that reference, it might still linger in memory for quite a while after that. Long story short: You cannot really enforce the deallocation of memory in Python. Cheers, zipit
https://plugincafe.maxon.net/topic/12647/make-clean-up-after-script/1
CC-MAIN-2022-27
refinedweb
323
64.51
Xaml Meets ASP.NET MVC – Create databound visualizations with Xaml + ASP.NET MVC using XamlAsyncController For some time, I was thinking about leveraging the power of Xaml to create databound images/visualizations in web applications, and did a quick POC implementation of the same. Here is the demonstration. We’ll start from the final output, and see how quickly you can render a Xaml control as an image from your ASP.NET MVC Controller – after binding that to your ViewData or view model. [+] Related Code: Go get it, from codeplex Generating data bound images using Xaml The gist of this article is how you can create dynamic or data bound images from Xaml user control. When I mention dynamic or data bound image - what I mean is, taking a snapshot of your Xaml user controls that are bound to the view data or view model. So that, your images will be generated based on the view data/model. How to do this? The related source code (find the link above) contains a XamlAsyncController implementation, to make the whole task easier in ASP.NET MVC. Let us go through an example - If you see, there are two images in the below web page – and in this case, both images are generated from Xaml files. The Views\Home\Index.aspx view that generates the above page is as below. <%@ Page Home Page </asp:Content> <asp:Content <h2><%: ViewData["Message"] %></h2> <image src="/Home/ShowMessage" /> <h2>A simple dashboard</h2> <image src="/Dashboard/Chart" /> </asp:Content> How to render a Xaml from the Controller? Now, let us see how you can use XamlAsyncController to implement Xaml image rendering in your own ASP.NET application. If you see the aspx code above, you may note that the first image is generated by the action ShowMessage in Home Controller, and the second image is generated from the Chart action in Dashboard controller. To render a Xaml Usercontrol as an image, basically you need to do two things. 1 – Inherit your Controller from XamlAsyncController, and add an Async action for serving the image You need to create a reference from your ASP.NET MVC to the MvcXamlController.Lib (included in the download below). For example, let us see how the ShowMessage action is implemented for serving the image. If you look at the HomeController, you can see that the HomeController is inherited from XamlAsyncController, a custom abstract controller I implemented, inheriting from AsyncController, that’s already there in ASP.NET MVC 2 (Read about using Asynchronous Controller if you are not familiar with Asynchronous controllers). You just call StartRendering() method in your ShowMessageAsync, and return XamlView() in your ShowMessageCompleted. [HandleError] public class HomeController : XamlAsyncController { public void ShowMessageAsync() { ViewData["Message"] = "Welcome from Xaml in MVC"; StartRendering(); } public ActionResult ShowMessageCompleted() { return XamlView(); } } 2 – Add your Xaml file to the path /Visualizations/{Controller}/{Action}.xaml Once you have your controller’s action to serve an image as above, the next step is to add a Xaml file. Before that, you should It should be in the path /Visualizations/{Controller}/{Action}.xaml (This is synonymous to adding a View in the path /Views/{Controller}/{Action}.aspx SideNote: This is going to be a bit tricky. First thing is, you need to add references to WindowsBase, PresentationUI, PresentationCore dlls to your ASP.NET MVC project, which is already done in the demo project. Then you need to manually copy a xaml file from some where else to the required path, as Visual Studio ‘Add New Item’ dialog box won’t display xaml files when you try to add a new item inside your ASP.NET MVC Project. See how I’m placing the ShowMessage.xaml, under the Visualizations\Home folder. And you are done. XamlAsyncController is smart enough to fetch the xaml file from the location /Visualizations/{Controller}/{Action}.xaml by convention, and render it as an image. And of course, in your Xaml, you can bind to the ViewData. Let us have a look into our ShowMessage.xaml <UserControl x: <Grid> <Border BorderBrush="Gray" BorderThickness="2" Background="WhiteSmoke" CornerRadius="5" Padding="4"> <StackPanel> <TextBlock Text="I am a WPF control rendered as image"/> <TextBlock Text="{Binding Message}"/> </StackPanel> </Border> </Grid> </UserControl>And you can find that we are binding to the Message element we pumped into the ViewData, from the controller. This is generating the first image in the rendered html file. Another Example – A Simple Chart Image Now, let us see how to create a simple chart in Xaml, based on your viewmodel data?. Let us add a new Controller to our ASP.NET MVC App, DashboardController.cs. And we have an async action, named Chart. See that we are using an overload of StartRendering to pass some dummy view model data. public class DashboardController : XamlAsyncController { public void ChartAsync() { var items=new List<Item>() { new Item{Name="Windows7", Region="US", SalesNo=100 }, new Item{Name="Office2010", Region="US", SalesNo=70 }, new Item{Name="Sharepoint", Region="US", SalesNo=20 }, }; StartRendering(viewModel:items); } public ActionResult ChartCompleted() { return XamlView(); } } And yes, we have our Chart.xaml in the location /Visualizations/Dashboard/Chart.xaml – and here is how it looks like. <UserControl x: <Grid Background="WhiteSmoke"> <ItemsControl ItemsSource="{Binding .}" HorizontalAlignment="Stretch" VerticalAlignment="Stretch"> <ItemsControl.ItemTemplate> <DataTemplate> <StackPanel HorizontalAlignment="Left"> <TextBlock Text="{Binding Name}" HorizontalAlignment="Left" Height="24" /> <Rectangle HorizontalAlignment="Left" Width="{Binding SalesNo}" Height="10" Fill="Red"/> </StackPanel> </DataTemplate> </ItemsControl.ItemTemplate> </ItemsControl> </Grid> </UserControl> More Thoughts And Going Forward We havn’t yet explored the actual implementation of my XamlAsyncController, and that is for another post. If you are so curios, you can definitely have a look at the source related code. And remember, this is an experimental implementation, with few hacks here and there. There are various cases that needs to be addressed – for example, the rendering will not be proper for elements that has some loading animation etc. Also, I havn’t yet run this outside the ASP.NET Development server, and hasn’t really done any performance analysis. How ever, I think this is a good start towards exploring how we can leverage Xaml in web applications. [Note: Read Part II about this] Laurent Bugnion also blogged about some of his thoughts about rendering Xaml from server side, how ever I still need to have a look into his code. Also, Keep in touch :) follow me in twitter or subscribe to this blog. Interested in ASP.NET MVC? Read my articles about creating a custom view engine for ASP.NET MVC or using Duck Typed View Models in ASP.NET MVC
http://www.amazedsaint.com/2010/07/xaml-meets-aspnet-mvc-create-databound.html
CC-MAIN-2018-17
refinedweb
1,091
55.95
Odoo Help Odoo is the world's easiest all-in-one management software. It includes hundreds of business apps: CRM | e-Commerce | Accounting | Inventory | PoS | Project management | MRP | etc. Getting field value from Wizard? Hello Odoo community, I created a wizard which has 2 fields start date and end date, with in the wizard user can choose the date and im trying to use that date value to filter invoice report but for some reason i cant retrieve my start date from the wizard it comes as empty all the time. Any suggestion or any ideas what im doing wrong Code example from openerp import fields,api from openerp.osv import orm class post_invoice_wizard(orm.TransientModel): _name = 'post.invoice.wizard' start_date = fields.Date('Start Date') end_date = fields.Date('End Date') @api.model def _post_invoice_ids(self): for rec in self: post_invoice_ids=self.env['posted.invoice.report'].search([('date_invoice', '>=', rec.start_date)]) return post_invoice_ids Hi Darius, I have doubt in @api.model which you have used for your method and I guess you should use @api.multi instead. Documentation: Hope this will help you. About This Community Odoo Training Center Access to our E-learning platform and experience all Odoo Apps through learning videos, exercises and Quizz.Test it now
https://www.odoo.com/forum/help-1/question/getting-field-value-from-wizard-92646
CC-MAIN-2017-47
refinedweb
207
60.11
Over-21 Game Shell and eTCL Slot Calculator Demo Example , numerical analysis converting a slot calculator into the Over-21 game shell.Odds for successive draws in a 52 card deck can be found for Blackjack as follows. Probability of drawing successive aces would be set probability_2_aces {(4 /52)* ( 3/ 51)}, 0.004524. In blackjack, a winning hand of two cards would be "ace & face ", where ace is counted as 11 and face as 10. Probability of drawing successive 2 cards for winning hand would be would be set probability_ace_&_face {(4/52)*(16/51)}, 0.02413. The dealer wins ties, so the dealer will win, even if a dream winning hand is dealt to both player and dealer. For example, four successive cards as first ace&face to the player and second ace&face to dealer would be set double_winner {(4/52)*(16/51)*(3/50)*(15/49)}, 0.0004432 . However, some dealers use multiple 2-8 decks and alternate dealing of cards to multiple players. The probability for multiple decks was set probability_multiple_decks (N1*4-S1+1)/(N1*52-S1+1)*(F1/(N1*52-S2+1))...). For example , winning hand for 2 decks was {(8/104)*(32/103)}, 0.02389. For 8 decks, the winning hand of ace&face would be {(32/416)*(128/415)}, 0.02372.There are differences between the Over-21 calculator and the Blackjack game on the tables. Both hands of 2 cards each for dealer and player are shown in display of Over-21 calculator. Whereas on the tables, only one card of the hand must be shown until dealer calls or payoff. Also the Over-21 calculator is using random selection on 52 card deck, not pulling cards off deck like the tables. On the tables, the order of dealt cards to dealer and more than one player might makes a difference in computed probability. On the tables, the ability to call the end of the game, knowledge of played top cards, and winning all ties are considerable advantages to dealer.General guidelines for computer games suggest the player should win one in five times to keep interest up. Right now, the Over-21 game shell is using the first deal of two cards to estimate win for the player. The winning conditions for the Over-21 game are 1) player score greater than dealer, 2) player score greater than ~17, and 3) player score less than 22 (not busted). On the tables, the dealer could win on the second or next deal, since the dealer calls or closes the game. Most of the tables use additional decks (2-8) and pull the dealt cards, which could be implemented in the program with concat or remove list element statements. The next steps for the Over-21 game shell should be tracking the win conditions for the dealer and a button for second deal. Blackjack ProbabilityProbability of drawing successive 2 cards for winning hand in Blackjack would be would be set probability_ace_&_face {(4/52)*(16/51)}, 0.02413. In a similar manner, the probability of face_&_ace would be set probability_face_&_ace{(16/51)*(4/52)}, 0.02413. The probability of winning hand on first draw would be (p(ace_&_face)+ face_&_ace ),(0.02413+0.02413), 0.04826. Converting to fraction, (1/0.04826)= 20.721094, the odds of winning a Blackjack hand on first draw was rounding 1/21. Meaning, the player has the expectation of winning one in 21 games.For two successive cards dealt from a 52 card deck and asking an extra draw in Blackjack, the breakpoint at 50% can be estimated for the probability of going busted. Suppose a player held a 16 from the first 2 card draw, then the player asking for an extra card dealt might thrown the hand over 21. Cards 6,7,8,9,10,J,Q,K=8 cards will raise the hand over 21, so the probability of the extra card would be 8*4 suits/ 52, converting 8*4/13*4, reduction 8/13,0.615, 61.5 percent. For a hand of 15 drawing an extra card, cards 7,8,9,10,J,Q,K=7 cards will bust. The probability of 15 and extra card causing bust would be 7*4 suits/ 52, converting 7*4/13*4, reduction 7/13,53.84 percent. A hand of 14 & extra card leads to cards 8,9,10,J,Q,K=6 cards for bust. The probability of 14 and extra card would be 6*4 suits/ 52, converting 6*4/13*4, reduction 6/13,46.15 percent. While the analysis does not account for cards pulled from a 52 card deck or multiple decks, the 50 percent breakpoint is between holding a hand of 14 and a hand of 16 for the extra card leading to player bust. The 50 percent breakpoint is why a player will often stand or hold at 16, figuring 16 as a probable win .The hold of 16 for the player can be used in the Over-21 game shell. Pushbutton OperationFor Deal button. The command { calculate; reportx } or { calculate ; reportx; clearx } can be added or changed to report automaticallyFor the push buttons in the Over-21 game shell, the recommended procedure is push Testcase and fill frame to initialize the display, then push Deal. Deal displays the first 2 card hands for dealer and player, estimating the state of the game. The cards in Over-21 are picked randomly from a list of 52 cards, not pulled from the multiple decks like the tables. The face cards are counted as 10, the ace cards are counted as 11, and other cards 2-10 at face value. Where both hands total less than 16, the winner of the Over-21 game is unresolved or undeclared. If a player is 16 or higher integer over the other player (dealer), that is considered a probable win, until resolved in the next round. A clear winner is a player that scores 21 or watches the other player bust, go over 21. The internal logic is set so that dealer wins all ties.If additional deals are necessary, push Extra_Deal to give each player one card. The extra cards are initialized at zero in the first deal and are not picked or reported until the Extra-Deal button is pushed for the Nth time. Holds are implemented foe the player only. Without hold, it is possible for both players to go over 21 on a deal. Normally at the tables, the dealer would hold ( no extra card ) on the near underside of 21 and not go out of the game with the player. Recommend pushing Report after Deal or Extra_Deal to preserve a game history, especially when multiple rounds of extra deals are played. Report allows copy and paste from console to a texteditor.The Hold_Extra command places an experimental and one time hold of >16 on the extra deal for the player. The hold of 16 effects players hands of 16 and higher, standing or holding for one deal. Hold_Extra will have no effect on hands lower than 16 and is not used for the first deal (hold of logic 0). The hold is withdrawn on the player with the execution of the extra deal and the rule is one push, one time of hold. Just for grins, the card picked during the player hold is filed in the Report, but not added to player score. Placing simultaneous experimental holds on both dealer and player was tried, but the program logic kept locking up (and not sure how to evaluate a round where both parties pass).The Clear command essentially nulls the Over-21 display to zeros and resets the initial variables, mostly to zero. Pushing a testcase refreshs the display with some new cards, but does not reset the internal testcase series etc.Though rare, some Over-21 games lasted 4 or 5 deals in the testing, where small cards were picked often in the early deals. Some tables honor a special rule that a player staying in the game for 5 deals is declared a winner. Additional Nth Extra-Deals are possible until both the dealer and player exceed 21, causing end of game reported as game state 505. Since this program is under test, after the end of game (505) recommend push Report, Clear, Testcase, and Deal to be sure all variables are saved and then cleared to restart the sequence. The best hint is that if the player is a probable winner on the first deal or reaches 21, take the money and run...staying at the tables for extra deals will not make you rich. Pseudocode and Equations dealer and player dealt two initial cards dealer wins all ties ace card is 1 or 11 all face cards are 10 all ten cards are 10 sum of cards > 21 set lose condition (bust) sum of dealer >sum of player =< 21, player in winner state sum of player = >sum of dealer =< 21, dealer in winner state player or dealer may sit turn, called hold condition player has 5 cards not over 21 is winner (rare) Player and dealer must show one card only, until close or payout. Normally dealer holds at 16 and above Normally player wants extra deal if hand <16 Double down, optional if player dealt two cards of same number, but not same suit. Player on double down must put double money up for two hands. 2 Testcase 3 Testcase 4For a system of dealing 2 cards from a 52 card deck, the sample space would be factorial(52)/((factorial(2)*factorial(52-2)) or 52!/(2!*50!), reduction 1326. Some generic expressions can be developed for the Over-21 coding. The factorial procedure used below was posted by Suchenworth. Most of these TCL expressions can be pasted into the console window and square brackets may be text substituted from wiki format.If one uses a reference to probability of getting Blackjack on a single deck, then the advantage to the house of multiple decks can be estimated for two card draw. Using the prob_win_21 formula, the win probability for a series of multiple decks ( 2 3 4 5 6 7 8 9 10 ... infinity) can be subtracted from the probability using a single deck. A small console program can be written using the foreach statement. % proc f n { expr {$n < 2 ? 1 : $n * [f [incr n -1]]} } #RS % set space [ expr { [f 52]/[f 2]/[f 50]} ] ;# generic TCL answer% 1326 % set space_ace [ expr { [f 4]/[f 1]/[f 3]} ] ;# generic TCL answer% 4 set space_face [ expr { [f 16]/[f 1]/[f 15]} ] ;# generic TCL answer% 16 probability of getting Blackjack(1) single deck is (4.*16.)/1326. answer% 0.04826 Blackjack(3) from 3*4=12, 3*16=48 Blackjack(3)= (12*48)/12090,0.04764 set space [ expr { [f 416]/[f 2]/[f 414]} ] 86320 8*4=32, 8*16=128 Blackjack(8)= (32*128)/86320 , 0.047451 if {sum of cards == 21}{ set winner state (blackjack) } if {16 > sum of cards < 21} {set probable winner state} trial black box flow. player> Testcase> Deal> Hold_Extra>Extra_Deal> Report>Hold_Extra>Extra_Deal>Report>End (505)>cut&paste game for later study>Clearx Pop routine on flow list should automate button commands. player> game> multiple game states> cashier> states are multiple during game, cashier pays one time > list of payouts and collection cashier's list < {0.001 unresolved} {2 +$pay} {3.001 +$pay} {4 no_pay }{5 -$pay} {6 -$pay} {7 wash_out} {8 wash_out}> Screenshots Section figure 1. References: - Blackjack Game Project Report - Andrew Thomas BSc (Hons) - New Blackjack Odds By Jeff Haney, article discussion of 6/5 payoffs Appendix Code edit appendix TCL programs and scripts # pretty print from autoindent and ased editor # Over-21 Game Shell global hold_status hold_keeper deal_number global stake1 stake2 payout1 payout2 set names {{} {dealer first card:} } lappend names {dealer second card:} lappend names {player first card: } lappend names {player second card :} lappend names {dealer score :} lappend names {player score : } lappend names {player need card dealt? < 0 or 1 > : } lappend names {game status < unr=0,1,2,3,4,5,6,end=505 >:} Over 21 Game Shell from TCL WIKI, written on eTCL value > game_status 0 >unresolved game or no winner declared. 1 >player is probable winner, lacking extra deal. 2 >dealer is busted, player is declared winner. 3 >player has 21, player is declared winner. 4 >dealer probable winner, lacking extra deal. 5 >player is busted, dealer is declared winner. 6 >dealer has 21, dealer is declared winner. 505 >game over, previous deal had declared winner or both dealer and player over 21 value , both busted. Note: Normally both players receive one card on extra deal. Extra_Hold is pushed for one time hold >16 on player. Player can be probable winner on first deal and player can lose on extra deal, like tables. " tk_messageBox -title "About" -message $msg } proc lpick L {lindex $L [expr int(rand()*[llength $L])];} proc swapx { text } { #set text [string map [list $item $replacewith] $text] #RS set text [string map [list "A" "11.0"] $text] set text [string map [list "J" "10.0"] $text] set text [string map [list "Q" "10.0"] $text] set text [string map [list "K" "10.0"] $text] return $text } proc cashier { } { global side1 side2 side3 side4 side5 global side6 side7 side8 global stake1 stake2 payout1 payout2 fired1 fired2 global testcase_number deal_number puts " &|test payout for status | $side8 | | need 6 iffys |&" puts " &| original stake | 1000 solar credits| | |& " set bet_norm 100. set payout1 0. set payout2 0. if { $side8 == 505. || $side8 > 500. } { return $payout2 } if { $fired1 == 1 || $fired2 == 1 } { return $payout2 } if { $side8 == 2. || $side8 == 3. } { set payout1 [* $bet_norm [/ 3. 2. ] 1. ] } if { $side8 == 2. || $side8 == 3. } { set payout2 [* $bet_norm [/ 6. 5. ] 1. ] } if { $side8 == 5. || $side8 == 6. } { set payout1 [* $bet_norm -1. ] } if { $side8 == 5. || $side8 == 6. } { set payout2 [* $bet_norm -1. ] } if { $side8 == 2. || $side8 == 3. } { set fired1 1 } if { $side8 == 5. || $side8 == 6. } { set fired2 1 } set stake1 [+ $stake1 $payout1 ] set stake2 [+ $stake2 $payout2 ] puts " &|test payout1 | $payout1 | | |& " puts " &|test payout2 | $payout2 | | |& " puts " &|test stake1 at 3/2 table| $stake1 | current payout | $payout1 |& " puts " &|test stake2 at 6/5 table| $stake2 | current payout | $payout2 |& " return $payout2 } proc winner_judge { } { global side1 side2 side3 side4 side5 global side6 side7 side8 global stake1 stake2 payout1 payout2 global testcase_number deal_number puts " dealer $side5 against player $side6 " puts "logic values p>16 [ expr { $side6 > 16. } ]" puts "logic values p<16 [ expr { $side6 < 16. } ] " puts "logic values p>d [ expr { $side6 > $side5 } ] " puts "logic values p>17 [ expr { $side6 > 17. } ] " puts "logic values p<22 [ expr { $side6 < 22. } ] " if { $side6 > 16. } { set side7 0. } if { $side6 < 16. } { set side7 1. } if { $side6 > $side5 && $side6 < 22. && $side6 > 17. } { set side8 1. } if { $side6 < 22. && $side5 > 21. } { set side8 2. } if { $side6 == 21. && $side6 > $side5 } { set side8 3. } if { $side5 > 16. && $side5 > $side6 && $side5 < 22. } { set side8 4. } if { $side6 > 21. && $side5 < 22. } { set side8 5. } if { $side5 == 21. && $side5 > $side6 } { set side8 6. } if { $side5 == 21. && $side6 == 21. } { set side8 6. } if { $side6 > 21. && $side5 > 21. } { set side8 505. } if { $side6 > 35. || $side5 > 35. } { set side8 505. } set stake5 [ cashier ] return 5. } proc calculate { } { global answer2 global side1 side2 side3 side4 side5 global side6 side7 side8 global list_cards list_values global dealer_1extra_card player_1extra_card global stake1 stake2 payout1 payout2 fired1 fired2 global hold_status hold_keeper global testcase_number deal_number incr testcase_number set deal_number 0 incr deal_number set cards { 2 3 4 5 6 7 8 9 10 J Q K A } set list_values { 2 3 4 5 6 7 8 9 10 10 10 10 11 } set list_cards [ concat $cards $cards $cards $cards ] set dealer_1extra_card 0 set player_1extra_card 0 set hold_status 0 set hold_keeper 0 if { $testcase_number == 1 && $deal_number == 1 } { set stake1 1000. } if { $testcase_number == 1 && $deal_number == 1 } { set stake2 1000. } set payout1 0. set payout2 0. set fired1 0. set fired2 0. puts "lists cards in current deck" puts "$list_cards" set side7 0.0 set side8 0.0 set side1 [ lpick $list_cards ] set side2 [ lpick $list_cards ] set side3 [ lpick $list_cards ] set side4 [ lpick $list_cards ] set side5 [+ [swapx $side1 ] [swapx $side2 ] ] set side6 [+ [swapx $side3 ] [swapx $side4 ] ] set side5 [* $side5 1. ] set side6 [* $side6 1. ] set winner [ winner_judge ] puts " first deal activated " } proc second_deal { } { global side1 side2 side3 side4 side5 global side6 side7 side8 global dealer_1extra_card player_1extra_card global stake1 stake2 payout1 payout2 global hold_status hold_keeper deal_number global list_cards list_values global testcase_number #incr testcase_number incr deal_number set dealer_1extra_card [ lpick $list_cards ] set player_1extra_card [ lpick $list_cards ] set side5 [+ $side5 [ swapx $dealer_1extra_card ] ] if { $hold_status < 1 || $side5 < $hold_status } { set side6 [+ $side6 [ swapx $player_1extra_card ] ] } set winner [ winner_judge ] puts " second or extra deal activated, dealing one extra card " puts " for dealer, $dealer_1extra_card valued at [ swapx $dealer_1extra_card ] " puts " for player, $player_1extra_card valued at [ swapx $player_1extra_card ] " } {} { global side1 side2 side3 side4 side5 global side6 side7 side8 global stake1 stake2 payout1 payout2 fired1 fired2 global testcase_number deal_number global hold_status hold_keeper global dealer_1extra_card player_1extra_card set payout1 0. set payout2 0. set testcase_number 0 set deal_number 0 set fired1 0 set fired2 0 set dealer_1extra_card 0 set player_1extra_card 0 set hold_status 0 set hold_keeper 0 foreach i {1 2 3 4 5 6 7 8 } { .frame.entry$i delete 0 end } } proc reportx {} { global side1 side2 side3 side4 side5 global side6 side7 side8 global dealer_1extra_card player_1extra_card global hold_status hold_keeper global stake1 stake2 payout1 payout2 global testcase_number deal_number console show; puts "%|table $testcase_number|printed in| tcl wiki format|% " puts "&|value| quanity | comment, if any|& " puts "&|$testcase_number|testcase number| |&" puts "&| $deal_number :|deal_number |&" puts "&| $side1 :|dealer first card| |&" puts "&| $side2 :|dealer second card| |& " puts "&| $dealer_1extra_card :|dealer extra card| if any Nth round|& " puts "&| $side3 :|player first card| |& " puts "&| $side4 :|player second card | |&" puts "&| $player_1extra_card :|player extra card | if any Nth round, not added under hold|&" puts "&| $hold_keeper :|hold status >16 on player| one time hold,if any Nth round |& " puts "&| $side5 :|dealer score | |&" puts "&| $side6 :|player score | |&" puts "&| $payout1 :|table payout at 3:2 | if dealer calls|&" puts "&| $payout2 :|table payout at 6:5 | if dealer calls|&" puts "&| $side7 :|player need card dealt? <0 or 1 >| |&" puts "&| $side8 :|game status < unr=0,1,2,3,4,5,6,end=505 > | |&" } frame .buttons -bg aquamarine4 ::ttk::button .calculator -text "Deal" -command { fillup 0 0 0 0 0 0 0 0 ;calculate } ::ttk::button .calculator2 -text "Extra_Deal" -command { second_deal ; set hold_keeper $hold_status; set hold_status 0 ; } ::ttk::button .calculator3 -text "Hold_Extra" -command { set hold_status 16 } ::ttk::button .test2 -text "Testcase1" -command {clearx;fillup 1.0 7.0 10.0 10.0 8. 20.0 0. 0.} ::ttk::button .test3 -text "Testcase2" -command {clearx;fillup 5.0 10.0 2.0 8.0 15. 10. 1. 0. } ::ttk::button .test4 -text "Testcase3" -command {clearx;fillup 4.00 5.0 5.0 6.0 9.0 11.0 1.0 0. } ::ttk::button .clearallx -text clear -command {clearx; fillup 0 0 0 0 0 0 0 0 } : .calculator3 .calculator2 -side bottom -in .buttons grid .frame .buttons -sticky ns -pady {0 10} . configure -background aquamarine4 -highlightcolor brown -relief raised -border 30 wm title . "Over-21 Game Shell Calculator " Console program for list of Unicode cards # written on Windows XP on eTCL # working under TCL version 8.5.6 and eTCL 1.0.1 # gold on TCL WIKI, 15oct2014 #Console program for list of Unicode cards and blackjack values package require Tk namespace path {::tcl::mathop ::tcl::mathfunc} console show global deck suit set deck {} set suitfrenchblack { \u2665 \u2666 \u2663 \u2660 } set suitfrenchblanc { \u2664 \u2661 \u2662 \u2667 } proc deal {args} { global deck suit foreach suit { \u2665 \u2666 \u2663 \u2660 } { foreach item $args { set kay "$item $suit " set kay "[append item $suit \ $item ]" lappend deck $kay }} return $deck } deal 2 3 4 5 6 7 8 9 10 A J Q K puts "$deck" # output {2♥ 2} {3♥ 3} {4♥ 4} {5♥ 5} {6♥ 6} {7♥ 7} {8♥ 8} gold This page is copyrighted under the TCL/TK license terms, this license Please place any comments here, Thanks.
http://wiki.tcl.tk/40829
CC-MAIN-2017-26
refinedweb
3,329
70.73
Designing an RPG character class where they have certain attributes. import java.util.Random; public class Character { private String name; private int strength; private int dexterity; private int intelligence; private int hp; private Random randomiser; private int stat = 16; public Character(String name) { this.name = name; randomiser = new Random(); //random stats between 3 and 18 strength = 3 + randomiser.nextInt(stat); dexterity = 3 + randomiser.nextInt(stat); intelligence = 3 + randomiser.nextInt(stat); hp = strength; } public String getName() { return name; } ... //Does damage to the character's hp public void gotHit(int dmg) { hp = hp - dmg; } //checks if hp is less than or equal to 0, returns true if dead; false if not dead. //resets hp back to 0 public boolean isDead() { if(hp < 0) { System.out.print("Character is dead"); hp = 0; return true; } else { return false; } } public int getHp() { return hp; } public String toString() { String info = (name + ", Strength: " + strength + ", Dexterity: " + dexterity + ", Intelligence: " + intelligence + ", Hit Points: " + hp ); return info; } driver ... Character Chicken = new Character("Chicken"); Chicken.gotHit(100); System.out.println(Chicken.toString()); ... The problem is that when I do "damage" it subtracts it from hp, but it's not checked by the isDead() method when it's printed in the toString() method. So I get a negative number instead of 0 being printed. Hope you can help.
http://www.dreamincode.net/forums/topic/107598-class-method-problems/
CC-MAIN-2016-44
refinedweb
215
56.66
Best way to strip punctuation from a string From an efficiency perspective, you're not going to beat s.translate(None, string.punctuation) For higher versions of Python use the following code: s.translate(str.maketrans('', '', string.punctuation)) It's performing raw string operations in C with a lookup table - there's not much that will beat that but writing your own C code. If speed isn't a worry, another option though is: exclude = set(string.punctuation)s = ''.join(ch for ch in s if ch not in exclude) This is faster than s.replace with each char, but won't perform as well as non-pure python approaches such as regexes or string.translate, as you can see from the below timings. For this type of problem, doing it at as low a level as possible pays off. Timing code: import re, string, timeits = "string. With. Punctuation"exclude = set(string.punctuation)table = string.maketrans("","")regex = re.compile('[%s]' % re.escape(string.punctuation))def test_set(s): return ''.join(ch for ch in s if ch not in exclude)def test_re(s): # From Vinko's solution, with fix. return regex.sub('', s)def test_trans(s): return s.translate(table, string.punctuation)def test_repl(s): # From S.Lott's solution for c in string.punctuation: s=s.replace(c,"") return sprint "sets :",timeit.Timer('f(s)', 'from __main__ import s,test_set as f').timeit(1000000)print "regex :",timeit.Timer('f(s)', 'from __main__ import s,test_re as f').timeit(1000000)print "translate :",timeit.Timer('f(s)', 'from __main__ import s,test_trans as f').timeit(1000000)print "replace :",timeit.Timer('f(s)', 'from __main__ import s,test_repl as f').timeit(1000000) This gives the following results: sets : 19.8566138744regex : 6.86155414581translate : 2.12455511093replace : 28.4436721802 Regular expressions are simple enough, if you know them. import res = "string. With. Punctuation?"s = re.sub(r'[^\w\s]','',s) For the convenience of usage, I sum up the note of striping punctuation from a string in both Python 2 and Python 3. Please refer to other answers for the detailed description. Python 2 import strings = "string. With. Punctuation?"table = string.maketrans("","")new_s = s.translate(table, string.punctuation) # Output: string without punctuation Python 3 import strings = "string. With. Punctuation?"table = str.maketrans(dict.fromkeys(string.punctuation)) # OR {key: None for key in string.punctuation}new_s = s.translate(table) # Output: string without punctuation
https://codehunter.cc/a/python/best-way-to-strip-punctuation-from-a-string
CC-MAIN-2022-21
refinedweb
396
53.37
Hi, - EtherNEC hangs with interrupts from stopped card - ethernec needs to bebuilt.NP, I will try a few more things to make ethernec behave better (right now you have to use module arguments to set the base address, unnecessarily.NP, I will try a few more things to make ethernec behave better (right now you have to use module arguments to set the base address, unnecessarily. I finally found out why the driver was even getting interrupt delivered. Turns out I did not use a wrapper for ei_interrupt in the usual case ... now where did my brown paper bag go?I finally found out why the driver was even getting interrupt delivered. Turns out I did not use a wrapper for ei_interrupt in the usual case ... now where did my brown paper bag go? Patch attached; this goes on top of 2.6.26-m68k (i.e. with the current EtherNEC patch applied). Should apply to Geert's tree with minimum fuss.The driver loads as module without need of parameters now. It should also load normally when compiled in (we'll see that in the next d-i iteration, won't we?). Michael(who hasn't forgotten about the various driver cleanups promised; it's been a tad busy lately) Use wrapper around ei_interrupt to prevent delivery of (timer!) interrupts before card is opened. Limit IO addresses to 0x300 (this is hardwired on the bus adapter anyway, and the card needs to be programmed to use that IO in some way before the adapter can work). Preset io=0x300 for module use. Signed-off-by: Michael Schmitz <schmitz@debian.org> atari_ethernec.c | 16 ++++++++++++---- 1 file changed, 12 insertions(+), 4 deletions(-) --- --- linux-2.6.26-geert/drivers/net/atari_ethernec.c 2008-07-18 01:21:51.000000000 +0200 +++ linux-2.6.26-ms/drivers/net/atari_ethernec.c 2008-09-27 05:42:37.000000000 +0200 @@ -124,7 +124,7 @@ /* A zero-terminated list of I/O addresses to be probed at boot. */ #ifndef MODULE static unsigned int netcard_portlist[] __initdata = { - 0x300, 0x280, 0x320, 0x340, 0x360, 0x380, 0 + 0x300, 0 }; #endif @@ -227,6 +227,14 @@ static struct net_device *poll_dev = NULL; +irqreturn_t atari_ei_interrupt(int irq, void *dev_id) +{ + struct net_device *dev = dev_id; + if (netif_running(dev)) + return ei_interrupt(dev->irq, dev); + return IRQ_NONE; +} + static void atari_ethernec_int(struct work_struct *work) { struct net_device *dev = poll_dev; @@ -619,7 +627,7 @@ mfp.tim_ct_cd = (mfp.tim_ct_cd & 0xf0) | 0x6; } /* Must make this shared in case other timer ints are needed */ - ret = request_irq(dev->irq, ei_interrupt, IRQF_SHARED, name, dev); + ret = request_irq(dev->irq, atari_ei_interrupt, IRQF_SHARED, name, dev); if (ret) { printk(" unable to get IRQ %d (errno=%d), polling instead.\n", dev->irq, ret); @@ -941,9 +949,9 @@ #ifdef MODULE -#define MAX_NE_CARDS 4 /* Max number of NE cards per module */ +#define MAX_NE_CARDS 1 /* Max number of NE cards per module */ static struct net_device *dev_ne[MAX_NE_CARDS]; -static int io[MAX_NE_CARDS]; +static int io[MAX_NE_CARDS] = { 0x300 }; static int irq[MAX_NE_CARDS]; static int bad[MAX_NE_CARDS]; /* 0xbad = bad sig or no reset ack */
https://lists.debian.org/debian-68k/2008/09/msg00111.html
CC-MAIN-2015-14
refinedweb
496
64.1
Inspecting application state with the SOS debugging tools dustin.🤔 Updated on ・4 min read This post originally appeared on Faithlife.Codes. In this post, we'll cover how to use the SOS debugging tools to inspect variables from a process dump of a .NET Framework / .NET Core application. Required: Obtaining a memory dump In this first example, we'll use a running ASP.NET MVC 5 application hosted by IIS, but the steps here can be used on a normal .NET framework Windows application. Let's start by taking a full memory dump of a running application. Download ProcDump and copy it to the server that runs the application you want to debug. Obtain the process ID from the application you want to profile by using Task Manager, and then pass it as an argument to procdump. procdump -ma <pid> You should now have a dump named similar to w3wp_171229_151050.dmp in the working directory. Note: If you're running several applications under a single app pool in IIS, it may be easier to debug by changing the app to run under its own application pool, which allows the ASP.NET app to run under a dedicated process. Inspecting the ASP.NET application state (.NET Framework) Now that we have a memory dump, it's time to look at the suspended state of the application. Copy the dump file to your workstation, and then open it via File > Open Crash Dump in WinDBG. Your screen should look like this: Load the SOS debugging extension, which will allow us to inspect the managed threads: !loadby sos clr Then, list the stack trace of every thread: !eestack Note: If get an exception when running this command and you are using IIS Express, try the command again. There appears to be a bug that throws an exception only for the first command run from WinDbg, which should not affect the rest of your debugging session. You should see a lot of threads in the output. To narrow the results down, search for the namespace of your project in the output text. We can see that there is an external web request being made in Thread 34. Let's look at what external URL is being requested. Switch to the thread, and then run clrstack -p to get some more detailed information about each method call. ~34 s !clrstack -p Note: You may see many arguments that contain the value <no data>. This can be caused by compiler optimizations; inspecting the state of these parameters is beyond the scope of this article. The controller is present in this call stack, so let's inspect the object instance by clicking on the this instance address, which is a shortcut for the !DumpObj command. This instance contains a field named _request, which contained a field named requestUri, which has the original URI for this request: That's it! The commands vary slightly for dumping different field types. .NET Core application on Linux Required: - LLDB 3.9 - Locally-built copy of the SOS plugin in the CoreCLR repo - instructions In this next scenario, we'll look at inspecting a core dump from a .NET Core app running on an Ubuntu x64 instance. The instance will have a core dump taken while a request is processing, which we will then inspect. Take a core dump of the process using the createdump utility. These commands assume you have the coreclr repo checked out to ~/git/coreclr, and that you're running an application built with .NET Core 2.0. sudo ~/git/coreclr/bin/Product/Linux.x64.Debug/createdump -u (pid) Load the dump in LLDB. This command also loads the SOS debugging extension. lldb-3.9 dotnet -c /tmp/coredump.18842 -o "plugin load ~/git/coreclr/bin/Product/Linux.x64.Debug/libsosplugin.so" After a few moments, a CLI will become available. Run eestack to dump the state of all CLR threads. If you get an empty output or a segmentation fault, verify that you are running the correct version of lldb and are loading libsosplugin from the bin directory, and that you have created the core dump with createdump. eestack There is an instance of HomeController in the stack of Thread 17. Switch to it to reveal more information about the current request. This time, we'll inspect the state of an internal .NET Core request frame, since information about the current request isn't as accessible as it was in ASP.NET MVC 5. thread select 17 sos DumpStackObjects Look for the address of Microsoft.AspNetCore.Server.Kestrel.Core.Internal.Http.Frame'1[[Microsoft.AspNetCore.Hosting.Internal.HostingApplication+Context, Microsoft.AspNetCore.Hosting]] in the output, and then dump the object. The name of this class might differ slightly based on the version of the framework you're running. Identify the QueryString field address: Dumping that field reveals the query part of the URL the browser requested! Thanks to Kyle Sletten, Justin Brooks, and Bradley Grainger for reviewing early drafts of this post. Further reading: - SOS.dll (SOS Debugging Extension) - .NET Core Debugging Instructions - Building CoreCLR on OS X - Building LLDB - Required for debugging .NET Core on OS X - .NET Core MiniDump Format How do *you* pronounce sudo?
https://dev.to/dustinsoftware/inspecting-application-state-with-the-sos-debugging-tools-i3a
CC-MAIN-2020-16
refinedweb
870
66.44
Opened 9 months ago Last modified 9 months ago #28999 new Cleanup/optimization Document how to use class-based views if you want to reverse by them by instance Description This is primarily an issue with CBV get_view() method. However this affects basically all url reverse resolution as urls.base reverse() -> urls.resolvers get_resolver() -> urls.resolvers RegexURLResolver -> utils.datastructures MultiValueDict -> dict() __getitem__() which uses the key's hash value. I discovered this while attempting to reverse a class based view. No matter what I tried I could not get a reverse url. Quite a bit of testing and digging later, the issue appears to be that {{{as_view()}} creates a new view function on every call, and every one of these functions has a unique hash. This means that using the result of one call to "as_view" as the key in MultiValueDict results in a value you can not retrieve with a subsequent call. from testapp.views import MyCBView from django.utils.datastructures import MultiValueDict, MultiValueDictKeyError lookups = MultiValueDict() first_call = MyCBView.as_view() second_call = MyCBView.as_view() # Does Not Work lookups[first_call] = "Test Retrieval" try: test = lookups[second_call] except MultiValueDictKeyError: print("Lookup Failed {} != {}".format(first_call.__hash__(), second_call.__hash__())) # Works test = lookups[first_call] print("Lookup Succeeded test={}".format(test)) I am fairly certain that it is not intended (and certainly not documented) that you must store the return of "as_view", and use that stored value in your urlconf, if you ever want to be able to reverse using a CBV instance. Change History (7) comment:1 Changed 9 months ago by comment:2 follow-up: 4 Changed 9 months ago by I think it may be possible to fix by simply caching the view function when it is created and from then on returning the cache, but I haven't yet looked into how this would effect the initwargs. Hmm, I'm not sure how to best describe my preference. It's part of a larger experimentation with the framework. I'm trying to experiment with using get_absolute_url on my models, and I have CBV DetailView classes for my models. Firstly it seems round-about to use a URL name that represents a url->view mapping when I know the view I want. The View Class for a given Model is a core relationship that should never change, if someone uses my Models they should want to use my Views for those model. The URL name "should" not change, but it is perfectly reasonable someone may want to use my models and my views without using my urlconf. The url schema (and thus URL name) could be considered a separate matter to the models/views so I would like to de-couple these things in the code for my Models/Views. I hope that makes sense? Secondly I would ideally like to find a way to create a 'reverse' method that finds the url regardless of the namespacing. (So a user of my app would not have to update my models.py if they wanted to include my urlconf under a namespace). Obviously this won't work with URL names, but since every function should have a unique hash, I believe I can make it work using view functions... But that is currently impossible for CBV because of this issue. I'm probably trying to be far to fancy for my own good here, but that is the reason for my preference. I admit, it is likely that for 99% of people, documenting that it doesn't work would probably be a satisfactory fix. comment:3 Changed 9 months ago by So I think I have a possible fix, but I'm not yet set up for contributing to Django so I haven't run the test suite on it. Preliminary testing appeared to work. I would be interested in hearing others thoughts before investing more time into this. @classonlymethod def as_view(cls, **initkwargs): """ Main entry point for a request-response process. """ for key in initkwargs: if key in cls.http_method_names: raise TypeError("You tried to pass in the %s method name as a " "keyword argument to %s(). Don't do that." % (key, cls.__name__)) if not hasattr(cls, key): raise TypeError("%s() received an invalid keyword %r. as_view " "only accepts arguments that are already " "attributes of the class." % (cls.__name__, key)) class callable_view(object): # Instance of callable_view is callable and acts as a view function def __call__(self, request, *args, **kwargs): self = cls(**initkwargs) if hasattr(self, 'get') and not hasattr(self, 'head'): self.head = self.get self.request = request self.args = args self.kwargs = kwargs return self.dispatch(request, *args, **kwargs) def __hash__(self): return hash(cls) # Class' 'view function' hash is defined by the Class def __eq__(self, other): if not hasattr(other, '__hash__'): return False return hash(self) == hash(other) def __repr__(self): return repr(self.__call__.__func__) view = callable_view() view.view_class = cls view.view_initkwargs = initkwargs # take name and docstring from class update_wrapper(view, cls, updated=()) update_wrapper(view.__call__.__func__, cls, updated=()) # and possible attributes set by decorators # like csrf_exempt from dispatch update_wrapper(view.__call__.__func__, cls.dispatch, assigned=()) return view comment:4 follow-up: 6 Changed 9 months ago by Replying to airstandley: So I think I have a possible fix, but I'm not yet set up for contributing to Django so I haven't run the test suite on it. Preliminary testing appeared to work. I would be interested in hearing others thoughts before investing more time into this. <snip> The callable_view would compare equal regardless of its initkwargs, I don't think that's the behaviour we'd want. We can compare the initkwargs as well, but that becomes kinda iffy when more complex arguments don't necessarily compare equal, in which case we get the same problem but in a less obvious way that's harder to debug. I'd much rather document the pattern of calling .as_view() once in views.py and using the result as if it were a function-based view (if that's not yet documented), e.g.: class MyDetailView(DetailView): ... my_detail_view = MyDetailView.as_view() Then you can use my_detail_view as if it were a function-based view, and reversing would work as expected. I think that should solve your usecase as well without requiring changes to the way .as_view() and reverse() work. comment:5 Changed 9 months ago by I don't think the pattern that Marten described is documented. comment:6 Changed 9 months ago by Replying to Marten Kenbeek: The callable_viewwould compare equal regardless of its initkwargs, I don't think that's the behaviour we'd want. <snip> Alright, thanks for the insight. I was not entirely sure what the intention of the initkwargs was. If the intent is that a CBV will be used more than once with different initkwargs then I can not think of any approach with callable_view that would work. The latest docs suggest using CBV.as_view() in the urlconf urlpatterns. Would you change all of those references or add an addendum stating the downside of that approach and documenting the pattern that Marten described? comment:7 Changed 9 months ago by I would add a note and not change existing examples. I think reversing by instance isn't a common need. I'm not sure if that's worth trying to fix (besides documenting that it doesn't work) -- is there a reason your prefer reversing that way as opposed to using a URL name?
https://code.djangoproject.com/ticket/28999
CC-MAIN-2018-43
refinedweb
1,241
64
I purchased an red, blue, and green LOL shield and received Wed. The directions were unclear how to populate the board. I soldered all the leds with the positive lead to the right and all oriented in the same direction. The board seems to be working but i am unclear as to how to load the arduino examples or whether to load in processing. I am a novice. I can send code to the arduino and get it to light up certain combinations of leds. I copied the lol shield library to both arduino and processing. Neither program recognizes the library. When I open one of the examples in the library, in either program, I get error messages. For example, when i load "Life" into arduino, and run, i get an error, "LedSign has not been declared" and: Life.cpp: In function 'void loop()': Life.pde:-1: error: 'LedSign' has not been declared Life.pde:-1: error: 'LedSign' has not been declared " So i figured maybe i was supposed to load standard firmata into arduino and load "life" example into processing. I get the following message: "unexpected char: "i". #include <Charliplexing.h> //Imports the library, which needs to be //Initialized in setup. at processing.mode.java.JavaMode.handleRun(JavaMode.java:176) at processing.mode.java.JavaEditor$20.run(JavaEditor.java:481) at java.lang.Thread.run(Thread.java:680)
https://forums.adafruit.com/viewtopic.php?f=31&t=30132&p=188197
CC-MAIN-2016-30
refinedweb
229
59.4
There's this code in App/Management.py: def manage_workspace(self, REQUEST): """Dispatch to first interface in manage_options """ options=self.filtered_manage_options(REQUEST) try: m=options[0]['action'] if m=='manage_workspace': raise TypeError except: raise Unauthorized, ( 'You are not authorized to view this object.') (*) if m.find('/'): raise 'Redirect', ( "%s/%s" % (REQUEST['URL1'], m)) return getattr(self, m)(self, REQUEST) Advertising My question is about the marked block. I'd guess that the intent is to send a redirect if m (== options[0]['action']) contains a '/'. But m.find('/') evaluates to false only if m[0] == '/', otherwise it yields either -1 (which is true), if there's no '/' in m, or something greater 0, if there's a slash after the first char. Is this intended behavior or a bug? cheers, oliver _______________________________________________ Zope-Dev maillist - [EMAIL PROTECTED] ** No cross posts or HTML encoding! ** (Related lists - )
https://www.mail-archive.com/zope-dev@zope.org/msg13282.html
CC-MAIN-2016-44
refinedweb
145
56.15
Look who's BizTalk'in - Notes from a consultant in the field on all things integration I have been having fun working on the middle tier of an application which is using BizTalk, Windows Workflow, MSMQ and the Windows Communication Framework. One of the requirements is to processes messages we receive from a legacy system through MSMQ in FIFO order. We used WCF to communicate with MSMQ and used the msmqIntegrationBinding binding since the legacy application was placing the messages on the queue. The FIFO processing seemed to be little more difficult to iron out. It turns out that by default a service's InstanceContextMode is PerSession. If the channel is a datagram then the IntanceContextMode degrades to PerCall. So what happens is that the WCF runtime will create a new service instance for each available request/message up to the MaxConcurrentCalls/MaxConcurrentInstance setting utilizing multiple threads. This was not going to provide my FIFO processing. So, to set WCF to only pull and process one message at a time you need to set the service's InstanceContextMode to Single along with setting the ConcurrencyMode to single. The following code shows these settings: [ServiceBehavior(InstanceContextMode = InstanceContextMode.Single, ConcurrencyMode = ConcurrencyMode.Single, )] public class MSMQEventService : IMSMQEventInterface, IDisposable { Once these settings are set you will achieve FIFO processing and WCF will only utilize a single thread and will process one message at a time from the single queue. My next entry will cover processing multiple queues and processing each queue in FIFO order. Ah, the StarBucks is starting to kick in... BizTalk/WCF/MSMQ/WF/CSD/Orcas Neudesic's own David Pallmann In my previous entry on Processing FIFO MSMQ Messages using WCF I said that I would cover processing In two of my previous posts I talked about processing MSMQ messages in FIFO order with WCF. The way that
http://blogs.msdn.com/skaufman/archive/2007/09/16/processing-fifo-msmq-messages-using-wcf.aspx
crawl-002
refinedweb
307
52.9
Struts2 under JBoss ?rop Mar 31, 2011 12:05 PM Hi, Anyone here using Struts2 with Jboss6? What versions should I use? I am just trying to get the simplest "HelloWorld"-type webapp to execute, but keep running into different problems. I first tried Struts version 2.1.8.1. I build with maven3 and the application runs OK on Tomcat. On Jboss however, I first got an error "Failed to parse source: cvc-datatype-valid.1.2.1: '2.2.3' is not a valid value for 'decimal'" Same as described here: I read that one workaround was to turn off schema-validation in JBOSS_HOME\server\default\deployers\jbossweb.deployer\META-INF\war-deployers-jboss-beans.xml by adding <bean name="TldParsingDeployer" class="org.jboss.deployment.TldParsingDeployer"> ... <property name="useSchemaValidation">false</property> </bean> Is there a better solution/workaround to the above? Because I then got another deployment-error instead: [com.opensymphony.xwork2.util.FileManager] Could not create JarEntryRevision!: java.io.FileNotFoundException: vfs:\C:\program1\jboss6\server\default\deploy\msglist-webapp.war\WEB-INF\lib\struts2-sitemesh-plugin-2.1.8.1 This one I dont understand, and didnt find a solution for. The file it complaints about C:\program1\jboss6\server\default\deploy\msglist-webapp.war\WEB-INF\lib\struts2-sitemesh-plugin-2.1.8.1.jar DOES exist on the server. I also tried the struts version in my POM to 2.2.1.1, but get a similar problem. Anyone can tell me what I am missing or doing wrong here? What are the versions to use and the basic things you need to do to get Struts2 running at all? 1. Struts2 under JBoss ?Elias Ross Mar 31, 2011 3:23 PM (in response to rop) Google found me this: You should try Struts 2.2.1 2. Re: Struts2 under JBoss ?rop Mar 31, 2011 6:02 PM (in response to Elias Ross) Hi Elias, Yes, that also solves the first problem I described above. But what about the second one? Thats where I really get stuck... BR 3. Re: Struts2 under JBoss ?rop Mar 31, 2011 6:45 PM (in response to rop) Looks like it's related to this jira: The problem they fixed there in Struts-2.2.2 was that it had support for Jboss URL protocols vfszip: and vfsmemory: but not for vfsfile: So vfsfile: was added in this fix. But now... in my error msg above it complaints with another protocol, vfs: Is vfs: by any chance, something that was recently added in Jboss6, and now need to add that, too, in Struts2 ? Any ideas? 4. Re: Struts2 under JBoss ?rop Mar 31, 2011 7:55 PM (in response to rop) Heureka... I downloaded the source-code for org.apache.struts.xwork : xwork-core and built a custom-version where I added protocol "vfs" in URLUtil.java And finally, the Struts2-application runs OK in Jboss-6. But what an ordeal... I just wonder... since google didnt turn this up... am I the first person in the universe to discover that Struts2 doesnt work with Jboss6 ? 5. Re: Struts2 under JBoss ?Elias Ross Apr 1, 2011 12:33 AM (in response to rop) rop, It'd be helpful to file a bug with a patch with your changes. It's possible that you're the first with the problem, or at least the first to post here with the problem. 6. Struts2 under JBoss ?rop Apr 4, 2011 1:53 PM (in response to Elias Ross) Turned out, in spite of a long Exception-stacktrace in the server-log, it was just a warning and the application actually worked... Struts2/xwork team had a Jira for it already. 7. Struts2 under JBoss ?jaikiran pai Apr 5, 2011 2:19 AM (in response to rop) rop wrote: Struts2/xwork team had a Jira for it already. Could you please post a link to that JIRA? 8. Struts2 under JBoss ?rop Apr 5, 2011 4:41 AM (in response to jaikiran pai) Here the jira: xwork does not support the VFS of jboss-6.0Final 9. Re: Struts2 under JBoss ?Rob Juurlink Aug 28, 2011 10:01 AM (in response to rop) Today I had some issues deploying my Struts 2 application on JBoss 6 and found this message. I am using Struts 2.2.1.1 and the convention-plugin. Because of the vfs JBoss error, my action classes cannot be found. "There is no Action mapped for namespace xxx and action xxx.". So it is not in all circumstances a warning only. It is fixed in Struts 2.3 (see Jira link), but unfortunately that version has other issues with JBoss 6. It has something to do with javassist that is now part of ognl. JBoss provides its own version of javassist that clashes with ognl's version. 10. Re: Struts2 under JBoss ?Wolfgang Knauf Aug 29, 2011 7:57 AM (in response to Rob Juurlink) Hi, which Javassist version is bundled with Struts 2.3? Does it help to simply remove it and rely on the Javassist 3.12.0 bundled with JBoss 6.1.0? If not: you might take a look at JBoss classloading: Best regards Wolfgang 11. Re: Struts2 under JBoss ?Rob Juurlink Aug 29, 2011 8:17 AM (in response to Wolfgang Knauf) Hi, I solved my issue with Struts2. Indeed Javassis 3.12.0.GA is included through struts2-core 2.2.3 -> ognl 3.0.1. I have changed my pom.xml to include a dependency for Javassist with scope "provided". The resulting war does not include javassist.jar now. During startup, everything is fine. <dependency> <groupId>javassist</groupId> <artifactId>javassist</artifactId> <version>3.12.0.GA</version> <scope>provided</scope> </dependency> See also this thread of the Struts-Dev mailing list for details:
https://community.jboss.org/message/597016
CC-MAIN-2015-27
refinedweb
973
68.97
A Programming Language with Extended Static Checking Recently, I’ve been working on improving the core framework that underpins the Whiley compiler. This provides a platform for reading/writing files of specified content in a structured fashion. Like Java, Whiley provides a hierarchical namespace in which names live and can be imported by others. Let’s consider a simple example: package zlib.core import Console from whiley.lang.System import zlib.util.BitBuffer Here, we have two modules that must exist in the global namespace: whiley.lang.System and zlib.util.BitBuffer. As with Java, they co-exist in the same namespace, but do not necessarily originate from the same physical location (i.e. whiley.lang.System is located in the wyrt.jar, whilst zlib.util.BitBuffer is in a file system directory somewhere). whiley.lang.System zlib.util.BitBuffer wyrt.jar The Whiley compiler takes care of this through the Path.ID and Path.Root abstractions. A Path.ID represents a hierarchical name in the global namespace (e.g. whiley.lang.System); a Path.Root represents a physical location which forms the root of a name hierarchy (e.g. a jar file or a directory). Thus, the global namespace is made up from multiple roots and, to find a given item, we traverse them looking for it (we’ll ignore the possibility of collisions for simplicity). To illustrate, here’s the (slightly simplified) Path.ID interface: Path.ID Path.Root jar public interface ID { /** * Get number of components in this ID. * ... */ public int size(); /** * Return the component at a given index. * ... */ public String get(int index); /** * Get last component of this path ID. * ... */ public String last(); /** * Get parent of this path ID. * ... */ public ID parent(); /** * Append component onto end of this id. * ... */ public ID append(String component); } This all seems simple enough, right? Well, yeah it is! So, what’s up? Well, the thing is, the Whiley compiler is not the first system to address this problem! Eclipse, for example, adopts a similar approach through the IPath interface. A cut-down version of this interface is: IPath public interface IPath { /** * Returns the specified segment of this path, ... * ... */ public String segment(int index); /** * Returns the last segment of this path, ... * ... */ public String lastSegment(); /** * Returns the number of segments in this path. * ... */ public int segmentCount(); /** * Returns whether this path is a prefix of the given path. * ... */ public int isPrefixOf(IPath path); /** * Return absolute path with segments and device id. * ... */ public IPath makeAbsolute(IPath path); ... } Hopefully, you’ll notice both similarity and difference between the Path.ID and IPath interfaces. In fact, IPath has quite a few more methods which are not present in Path.ID. However, it should be clear that the functionality described by Path.ID is a subset of that described by IPath (albeit with slightly different names). Obviously, I want to integrate the Whiley compiler with Eclipse (i.e. make an Eclipse plugin for Whiley). At the same time, I want to reuse as much of Eclipse’s functionality as possible within my plugin — otherwise, I’m just adding more bloat to an already bloated system, and potentially compromising the effectiveness of my plugin. I’m prepared to go to some lengths to enable this, even to the point of changing my Path.ID interface to bring it more inline with IPath; however, I’m not prepared to make the Whiley Compiler depend upon Eclipse — that is, no import org.eclipse..* statements in the Whiley compiler. import org.eclipse..* How can I do this? Well, the obvious solution is to provide an Adaptor in the plugin which implements Path.ID and wraps an IPath. This probably requires adding a Factory interface (e.g. Path.Factory) for creating Path.ID instances, since the Whiley compiler needs the ability to construct Path.ID instances and test for their validity. Path.Factory The adaptor works, right? Yeah, it does. But, it amounts to layering one abstraction on top of another when they’re really the same thing. It would be nice if there was a mechanism for binding abstractions together. For example, in the Eclipse Plugin, it would let me say: let an IPath be a valid Path.ID with the following binding between names. But, perhaps that’s too much wishful thinking… Why not just make Path.ID’s interface compatible with IPath, so you can swap out the naked Path.ID interface with the IPath as a base interface (ahh inheritance of interfaces ) My guess is that IPath will probably provide more methods then Path.ID, so you might want Path.ID to actually be an abstract class that can fill/stub the functionality. Technically you would only use Path.ID extends IPath when you want eclipse support (via an optional jar or similar) so you wouldn’t _require_ eclipse. Hi Andrew, Yeah, so basically have different versions of Path.ID, depending on whether I’m compiling for Eclipse or stand-alone. Yeah, that’s workable … but still not exactly elegant D I was thinking about swapping the class out at run-time, I’m not sure how tolerate Java is about that. But compile time would work too. Obviously when building new things that use Path.ID you’ll have to test against both versions. you want clojure protocols/multimethods or haskell type classes in clojure Path.ID would be a protocol that the plugin could extend to IPath java style interfaces have the problem of not being injectable/extendable to types you don’t own What I would do in this case is, add some class overriding methods to Path.ID. (without making a separate factory, as it’s unnecessary) private static Class actualClass = ID; public static ID create(); public static void setClass( Class clz ); And then set adaptor with setClass, and in create use actualClass to create the instance. 56 queries. 0.344 seconds.
http://whiley.org/2012/02/29/a-problem-of-decoupling/
CC-MAIN-2014-52
refinedweb
978
59.7
Back to index The arguments for a query. More... import "nsIAbDirectoryQuery.idl"; The arguments for a query. Contains an expression for perform matches and an array of properties which should be returned if a match is found from the expression Definition at line 54 of file nsIAbDirectoryQuery.idl. The list of properties which should be returned if a match occurs on a card. Defines the boolean expression for the matching of cards. Definition at line 61 of file nsIAbDirectoryQuery.idl. Defines if sub directories should be queried. Definition at line 68 of file nsIAbDirectoryQuery.idl. A parameter which can be used to pass in data specific to a particular type of addressbook. Definition at line 86 of file nsIAbDirectoryQuery.idl.
https://sourcecodebrowser.com/lightning-sunbird/0.9plus-pnobinonly/interfacens_i_ab_directory_query_arguments.html
CC-MAIN-2017-51
refinedweb
120
51.34
Latest version is always in Github [Open.NAT] If you've developed some kind of server like a media server, file storage server, instant messages server or any other then surely you already know what is all this about NAT traversal and about the different techniques to make your computer reachable from outside and their pros and cons. One of those techniques consist in opening a port in your router and in specifying that all incoming traffic through that port must be forwarded to you computer IP:Port pair. (port forwarding) To do this you need to deal with at least two protocols, SSDP and also UPNP or PMP. That isn't easy at all, their specs are full of details and each router implements the protocols in different ways. Testing is also a nightmare, what works perfectly fine in your router, doesn't work in somebody else's. Open.NAT is a lightweight and easy-to-use class library to do port forwarding in NAT devices (Network Address Translator) that support Universal Plug and Play (UPNP) and/or Port Mapping Protocol (PMP). It is written in C# and works for .NET and Mono. At its beginning, internet was a network that allowed to every computer communicate with any other computer directly, the only thing you needed to know in order to achieve it, was those other computers' IP addresses. Those must have been happy days, however, it seems nobody expected such a success simply because internet wasn't designed for being used by million and million people; instead, it was designed for being used by the Advanced Research Projects Agency's projects (U.S. Department of Defense) at universities and research laboratories in the US. That's why IPv4 addresses are 32 bits numbers, because it provides a 2^32 address space ( 4,294,967,296 unique addresses), a big enough space for that time. Later, Internet started to be massively adopted but, even when everything continued alright for a short while, it was self evident that soon there wouldn't be enough IP addresses for everybody, that's the reason for the development of its successor protocol, IPv6. IPv6 solves the IPs availability problem however, for when it appeared, there was all a huge world-wide IPv4 infrastructure; not just hardware equipment but also software involved. The transition from IPv4 to IPv6 is still ongoing, in fact, vast part of the world's countries still have IPv4. Moreover, shortage of IPv4 addresses made them expensive. Just imagine you have 10,000 computers and need to pay for 10,000 IPs! That was a real problem. The solution were NATs. The basic idea is that only one computer/device owns an IP address (which also means it is reachable from outside) and the rest of the computers are behind. When one of the computers needs to send data, it does it through the NAT device, like a middleman who performs actions by you and then comes back with the results. Now, even though that's a solution in one sense, it is also a problem given that only works well for outgoing connections, that means you cannot longer host a server on your personal computer because it is not visible from outside, only the NAT is. We have to take into account that there are NATs at different levels, for example, all computers in my office are behind the office's NAT, the computers in my city are behind a carrier's NAT, and so on, we can see how the computer-to-computer original design has "evolved" to a hierarchical one, where you have access to Google, Facebook, The Guardian and other big companies' services but our mothers cannot have access to services we host in our home computers with the same easiness. You can install Open.NAT using Nuget Install-Package Open.NAT Install-Package Open.NAT Or just downloading the code from its Github repo here and including it as part of your solution. Once its done you are ready to use it. Here we have a typical scenario where we need to know our external IP address (the NAT device's IP) and create a port mapping for TCP protocol external_ip:1700 --> host_machine:1600. var discoverer = new NatDiscoverer(); // using SSDP protocol, it discovers NAT device. var device = await discoverer.DiscoverDeviceAsync(); // display the NAT's IP address Console.WriteLine("The external IP Address is: {0} ", await device.GetExternalIPAsync()); // create a new mapping in the router [external_ip:1702 -> host_machine:1602] await device.CreatePortMapAsync(new Mapping(Protocol.Tcp, 1602, 1702, "For testing")); // configure a TCP socket listening on port 1602 var endPoint = new IPEndPoint(IPAddress.Any, 1602); var socket = new Socket(endPoint.AddressFamily, SocketType.Stream, ProtocolType.Tcp); socket.SetIPProtectionLevel(IPProtectionLevel.Unrestricted); socket.Bind(endPoint); socket.Listen(4); After running this we can verify that the port was open. I did it using online tool. Note how even when our socket listens on port 1602, we got access through 1702 And the following code lets you list the existing mappings in the router: var nat = new NatDiscoverer(); // we don't want to discover forever, just 5 senconds or less var cts = new CancellationTokenSource(5000); // we are only interested in Upnp NATs because PMP protocol doesn't allow to list mappings var device = await nat.DiscoverDeviceAsync(PortMapper.Upnp, cts); foreach (var mapping in await device.GetAllMappingsAsync()) { Console.WriteLine(mapping); } Lets delete some mappings: var nat = new NatDiscoverer(); var cts = new CancellationTokenSource(5000); var device = await nat.DiscoverDeviceAsync(PortMapper.Upnp, cts); foreach (var mapping in await device.GetAllMappingsAsync()) { // in this example we want to delete the "Skype" mappings if(mapping.Description.Contains("Skype")) { Console.WriteLine("Deleting {0}", mapping); await device.DeletePortMapAsync(mapping); } } How to handle exceptions? What if the specified mapping already exists for other service? Well, the following code provides a clue: try { var nat = new NatDiscoverer(); var cts = new CancellationTokenSource(5000); var device = await nat.DiscoverDeviceAsync(PortMapper.Upnp, cts); await device.CreatePortMapAsync(new Mapping(Protocol.Tcp, 1600, 1700, "The mapping name")); } catch(NatDeviceNotFoundException e) { Console.WriteLine("Open.NAT wasn't able to find an Upnp device ;(") } catch(MappingException me) { switch(me.ErrorCode) { case 718: Console.WriteLine("The external port already in use."); break; case 728: Console.WriteLine("The router's mapping table is full."); break; ....... .... .. } } Of course, in case of errors, you can do something smarter than display the problem in console, for example, when external port is already in use you could try with other port number. Open.NAT was born as a fork of Mono.Nat so, comparisons are unavoidable. First of all is important to state that there isn't anything wrong with Mono.Nat, it is a very very good library. The main motivation for Open.NAT was make it work in my routers and once I did it, I continued adding features, refactoring and updating the code in order to use some benefits offered by .NET 4.5 and C# 5, as well as changing the discovery process nature. Here you have a list of some of the changes: Open.NAT is a different library with different ideas and features so, it had to have a different name but not very different. The problem with Mono.Nat is that it is not a part of mono itself, it just happens to be in the Mono.Nat namespace and for that reason people ssume it's part of the mono framework. Probably Open.NAT is not a better name but it resembles Mono.Nat and plays with the idea that it is an open source to open NATs There are projects that are interested only in Upnp devices and don't want to discover PMP NATs. Others, only want to discover and handle PMP NATs instead of UPNP ones. Finally, there are others which want to discover both. This is possible with Open.NAT. Applications that add portmappings should remove them when they are no longer required or remove them as part of the shutdown steps. By default, Open.NAT keeps track of the added portmappings in order to release them automatically. Developers don't need to take care of this kind of details in their projects. Anyway, a ReleaseAll method is provided just in case a deterministic portmappings release is required. This doesn't mean you cannot create permanet portmappings (mappings that never expire), you can do it. Developers don't know how long a portmapping can be required in advance and for that reason they tend to create Permanet portmappings (a mapping that never expires). Given that an application can finish unexpectedly (because someone unplug the computer, for example) permanent mappings remain opened and that is not okay in many cases. Open.NAT allows developers to specify a mapping lifetime and it renews the mapping automatically before the expiration so if someone unplug the computer the NAT will release the portmapping after a certain time. If after some time a NAT is not discovered, you can be sure there isn't any NAT available for Open.NAT. This can be because the NAT doesn't support Upnp nor Pmp (or they are not enabled) or because there is simply no NATs. Open.NAT allows to specify a discovery timeout in order to stop the discovery process and let the host application know that no NATs were found. Open.NAT doesn't make use of a long run thread in order to being continuously discovering NAT devices, like Mono.Nat does it. This isn't better nor worse, it is just different. The idea behind this change is to avoid resources consumption (thread and bandwidth). If you think is a good idea to discover devices all the time, in fact sometimes it is, you are free to do it with your own thread, timer or the cheapest resource you count on. Performance, performance and ... performance The discovery process in Mono.Nat is a very time consuming task because it asks all devices in the LAN for all the services they support. It generates a lot of network traffic and then, it has to process all the responses. Open.NAT only asks for those services capable to map ports, improving drastically the discovery process. When it comes to UPnP, different routers support different kind of mappings, some of them only support permanent mappings, others require the same external and internal port number, others only support wildcard in the remote host, etc. Oh! and there are a few that have more than just one of these restrictions. Open.NAT does its best to deal with routers that have restrictions of that type. For example, if you need to map a port for 10 minutes and the router only supports permanent mappings, Open.NAT creates a permanent mapping and also takes care of release it when application exits. Open.NAT works not only for NATs supporting WANIPConnection service type but also WANPPPConnection service type. To do this, it has to deal with some inexpensive ADSL modems that respond with wrong service type and that used to support only WANPPPConnection. All the operations with Open.NAT are asynchronous because it is the real nature of operations and developers don't need to wait for an operation blocking the thread, they can do a lot of useful stuff instead of being waiting. Remember, these are client-server operations. Developers who want synchronous operations just need to wait for the operation completion. If developers find problems and need support, they can enable the code tracing in Verbose (or All) level and get the full detail about what is going on inside, with the requests/responses, warnings, errors, stacktraces, etc. They can also use the Open.Nat.ConsoleTest project to play with Open.NAT and reproduce problems. You! Of course you need it and for that reason Open.NAT has a wiki page with the API reference, Code examples, Troubleshooting, Errors, Warnings and an always improving Home page. (Disclaimer: documentation for version 2 is still in progress) I am working hard to make Open.NAT the best library in its type and to achieve that goal getting feedback and giving support are on top of the priority list. You will never feel alone Open.NAT uses .NET Framework 4.5 and thanks to that its code is a lot clearer and easy to read, maintain and debug. It also includes several new advantages and fixes. However, that is not good news if your project uses a older version of .NET. The Open.NAT solution includes the Open.NAT.ConsoleTest project that can be used for Troubleshooting. Just download the code and run it. Your IP: 181.110.171.21 Added mapping: 181.110.171.21:1700 -> 127.0.0.1:1600Mapping List +------+-------------------------------+--------------------------------+----------------------------------+ | PROT | PUBLIC (Reacheable) | PRIVATE (Your computer) | Descriptopn | +------+----------------------+--------+-----------------------+--------+----------------------------------+ | | IP Address | Port | IP Address | Port | | +------+----------------------+--------+-----------------------+--------+----------------------------------+ | TCP | 181.110.171.21 | 21807 | 10.0.0.5 | 32400 | Plex Media Server | | UDP | 181.110.171.21 | 25911 | 10.0.0.6 | 25911 | Skype UDP at 10.0.0.6:25911 (2693)| | TCP | 181.110.171.21 | 25911 | 10.0.0.6 | 25911 | Skype TCP at 10.0.0.6:25911 (2693)| | TCP | 181.110.171.21 | 1700 | 10.0.0.6 | 1600 | Open.Nat Testing | +------+----------------------+--------+-----------------------+--------+----------------------------------+ [Removing TCP mapping] 181.110.171.21:1700 -> 127.0.0.1:1600 [Done] [SUCCESS]: Test mapping effectively removed ;) Press any kay to exit... Open.NAT provides its own TraceSource in order to enable applications to trace the execution of the library's code and find issues. To enable the tracing you can add the following two lines of code: NatDiscoverer.TraceSource.Switch.Level = SourceLevels.Verbose; NatDiscoverer.TraceSource.Listeners.Add(new ConsoleListener()); The first line specifies the tracing level to use and the levels used by Open.NAT are: Verbose, Error, Warning and Information. You should set it to Information first and switch to Verbose if errors occur. The second line adds a trace listener and you can choose the one that is better for you, some of the available listeners are: Here we can see an real trace output: OpenNat - Information > Initializing OpenNat - Information > StartDiscovery OpenNat - Information > Searching OpenNat - Information > Searching for: UpnpSearcher OpenNat - Information > UPnP Response: Router advertised a 'WANPPPConnection:1' service!!! OpenNat - Information > Found device at: OpenNat - Information > 10.0.0.2:5431: Fetching service list OpenNat - Information > 10.0.0.2:5431: Parsed services list OpenNat - Information > 10.0.0.2:5431: Found service: urn:schemas-upnp-org:service:Layer3Forwarding:1 OpenNat - Information > 10.0.0.2:5431: Found service: urn:schemas-upnp-org:service:WANCommonInterfaceConfig:1 OpenNat - Information > 10.0.0.2:5431: Found service: urn:schemas-upnp-org:service:WANPPPConnection:1 OpenNat - Information > 10.0.0.2:5431: Found upnp service at: /uuid:0000e068-20a0-00e0-20a0-48a802086048/WANPPPConnection:1 OpenNat - Information > 10.0.0.2:5431: Handshake Complete OpenNat - Information > UpnpNatDevice device found. OpenNat - Information > ---------------------VVV EndPoint: 10.0.0.2:5431 Control Url: Service Description Url: Service Type: urn:schemas-upnp-org:service:WANPPPConnection:1 Last Seen: 15/05/2014 10:43:23 p.m. It got it!! The external IP Address is: 186.108.237.5 OpenNat - Information > UPnP Response: Router advertised a 'WANPPPConnection:1' service!!! OpenNat - Information > Found device at: OpenNat - Information > Already found - Ignored CodeProjects has several articles about NAT portforwarding that we should take a look at: NAT Traversal with UPnP in C# is an excellent article and the author says a great truth: Quote: Setup up Port Forwarding on your local router is very convenient for users of your networking application and you should automatically set it up for them. This can be done with Open.NAT, and also a big headache is avoided. This article, along with any associated source code and files, is licensed under The MIT License var device = await discoverer.DiscoverDeviceAsync(); var device = await discoverer.DiscoverDeviceAsync(PortMapper.Upnp, cts); System.ArgumentOutOfRangeException was unhandled by user code HResult=-2146233086 Message=Specified argument was out of the range of valid values. Parameter name: poertMapper Source=Open.Nat ParamName=poertMapper StackTrace: at Open.Nat.Guard.IsTrue(Boolean exp, String paramName) at Open.Nat.NatDiscoverer.<DiscoverDeviceAsync>d_.Nat.NatDiscoverer.<DiscoverDeviceAsyncNATTest.Program.<Do1>d__0.MoveNext() in c:\Users\Me\Documents\Visual Studio 2013\Projects\CPTests\OpenNATTest\Program.cs:line 23 InnerException: Quote:what are the advantages of your lib? Quote:Is port forwarding needed for client applications that connect directly to a remote socket server? Quote:Would port forwarding be able to improve socket connections behind corporate firewalls / ISPs such that applications can communicate with otherwise blocked servers? General News Suggestion Question Bug Answer Joke Rant Admin Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages.
http://www.codeproject.com/Articles/807861/Open-NAT-A-NAT-Traversal-library-for-NET-and-Mono
CC-MAIN-2015-40
refinedweb
2,781
56.96
The Functional Style Part 8: Persistent data structures. We have discussed immutability at length by now. In particular, we have covered how loops can be replaced with recursive function calls to do iteration while avoiding reassignment of any variables. While it might seem on the face of it that this technique would be grossly inefficient in terms of memory, we have seen how tail call elimination can negate the need for extra subroutine calls, thus avoiding growing the call stack, and making functional algorithms essentially identical to their imperative equivalents at the machine level. So much for scalar values, but what about complex data structures like arrays and dictionaries? In functional languages they are immutable too. Therefore, if we have a list in Clojure: (def a-list (list 1 2 3)) and we construct a new list by adding an element to it: (def another-list (cons 0 a-list)) then we now have two lists, one equal to (1 2 3) and the other (0 1 2 3). Does this mean that, in order to modify a data structure in the functional style, we have to make a whole copy of the original, in order to preserve it unchanged? That would seem grossly inefficient. However, there are ways of making data structures appear to be modifiable while preserving the original intact for any parts of the program that still hold references to it. Such data structures are said to be ‘persistent’, in contrast with ‘ephemeral’ data structures, which are mutable. Data structures are fully persistent when every version of the structure can be modified, which is the type we will discuss here. Structures are partially persistent when only the most recent version can be modified. Fully stacked. The stack is a very good example for how to implement a persistent data structure, because it also shows a side benefit of this technique. The code happens to get simpler too; we will come to that in a moment. Because we are creating a functional stack here, our interface looks like this: public interface Stack<T> { Stack<T> pop(); Optional<T> top(); Stack<T> push(T top); } Given that this is a persistent data structure, we can’t ever modify it, so on pushing or popping we get a new Stack instance that reflects our pushed or popped stack. Any parts of the program that are holding references to previous states of the stack will continue to see that state unchanged. For obtaining a brand new stack, we will provide a static factory method: public interface Stack<T> { ... // pop, top, push static <T> Stack<T> create() { return new EmptyStack<T>(); } } It might seem a strange design choice to create a specific implementation for an empty stack, but when we do it works out very tidily. This is the benefit I mentioned earlier: public class EmptyStack<T> implements Stack<T> { @Override public Stack<T> pop() { throw new IllegalStateException(); } @Override public Optional<T> top() { return Optional.empty(); } @Override public Stack<T> push(T top) { return new NonEmptyStack<>(top, this); } } As you can see, top returns empty and pop throws an illegal state exception. I think it is reasonable to throw an exception in this case, because given the last in first out nature of stacks and their typical uses, any attempt to pop an empty stack indicates a probable bug in the program rather than a user error. As for the non-empty implementation, that looks like this: public class NonEmptyStack<T> implements Stack<T> { private final T top; private final Stack<T> popped; public NonEmptyStack(T top, Stack<T> popped) { this.top = top; this.popped = popped; } @Override public Stack<T> pop() { return popped; } @Override public Optional<T> top() { return Optional.of(top); } @Override public Stack<T> push(T top) { return new NonEmptyStack<>(top, this); } } It’s worth noting that choosing separate implementations for the empty and non-empty cases avoided the need for any conditional logic. There are no if statements in the above code. In use, the stack behaves like this: Stack.createreturns an EmptyStackinstance. - Pushing on it returns a NonEmptyStackwith the pushed value as its top. - When another value is pushed on top of the non-empty stack, another NonEmptyStack instance is created with the newly pushed value on top: The timeline here runs from left to right, and the ovals at the bottom of the diagram represent the ‘view’ the client sees of the stack. The regions bounded by dashed lines indicate that all the boxes within are all the same object instance. The direction of the arrows show that each NonEmptyStack instance holds a reference to another stack instance, its parent, which will either be empty or non-empty. This parent object reference is what will be returned when the stack is popped, and that is where things get clever: On popping the stack, the client simply shifts its view to the previously pushed element. Nothing gets deleted. This means that, if there were two clients with a view of the same stack, one client could pop it without affecting the other client’s view of the stack. The same is true of pushing: This is basically the same as the first diagram, except that we have dispensed with the horizontal dashed regions and instead made it explicit that the stack is a single data structure rather than several copies. We are simply representing each Stack instance, whether empty or not, as a single box. Initially the client sees an empty stack; on pushing, a non-empty stack instance is created which points at the empty stack, and the client shifts its view to the new instance. When the client pushes a second time, another non-empty stack instance is created which points at the previous non-empty stack, and the client shifts its view again. The stack tells the client where to point its view next, via the return value of the push and pop operations, but it is the client that actually shifts its view. The stack does not move anything. Now you might be thinking that "the client shifts its view" implies something is being mutated, and indeed it does, but this is simply a variable holding a reference to a Stack instance. We have discussed at length already in part 2 of the series how to manage changing scalar values without having to reassign variables. We already mentioned the possibility that different parts of the program might be holding separate and different views on the stack structure, so now let’s explicitly imagine that the three ovals represent three different clients’ views of one single stack structure. Client 1 is looking at a newly created stack, client 2 has pushed once on it, and client 3 has pushed twice on it: If client 2 subsequently pushed something on the stack, the effect would look like this: Notice the directions of the arrows ensure that neither client 1 or client 3 are affected by what client 2 did: client 3 cannot follow the arrow backwards to see the value that was just pushed, and nor can client 1. Similarly if client 1 pushed on the stack neither of the others would be affected by that either: The other thing to note about this data structure is that nothing is duplicated. All three clients share the same EmptyStack instance, and clients 2 and 3 also share the NonEmptyStack that was pushed first. Everything that could be shared is shared. Nothing is copied whenever any of them push, and popping does not cause any links in the structure to be broken. So when do things get deleted? Eventually we must reclaim resources or our program may run out of memory. If a stack element has been popped, and no part of the program is holding a reference to it any more, in time it will be reclaimed by the garbage collector. Indeed, garbage collection is an essential feature for functional programming in any language. Cons cells, CAR and CDR. This stack structure might seem familiar to you. If it does, it's for good reason. This structure is known as a linked list and it is a foundational data structure in computer science. Usually linked lists are represented by a diagram something like this: The list is a chain of elements, and each element contains a pair of pointers. One of the two pointers points to a value. The other one points at the next element in the chain, except for the final element in the chain which does not point to another element. In this way a chain of values can be linked together. One advantage usually cited for this kind of data structure, in imperative programming, is that it is very cheap to insert a value in the middle of a list: all you need to do is create the new element, link it to the following element, and re-point the preceding element in the list to the new element. The elements of a linked list do not have to be contiguous in memory, nor do they need to be stored in order, unlike an array. An array would require shuffling all the elements down after the inserted element in order to make room for it, which could be a very expensive operation indeed. On the other hand, linked lists perform poorly for random access, which is an O(n) operation. This is because to find the nth element you must traverse the preceding (n - 1) elements, unlike an array where you can access any element in constant O(1) time. Linked lists are an essential data structure in functional programming. The Lisp programming language is literally built out of them. In Lisp, a single element of a linked list is referred to as a cons cell: The CAR pointer points to the value of the cons cell while the CDR pointer points to the next element in the list. CAR and CDR are archaic terms which are not in general use any more, but I mention them for historical interest, and perhaps you might come across them. The Lisp programming language was first implemented on an IBM 704 mainframe, and the implementers found it convenient to store a cons cell in a machine word. The pointer to the cell value was stored in the “address” part of the word, while the pointer to the next cell was stored in the “decrement” part. It was convenient because the machine had instructions that could be used to access both of these values directly, when the cell was loaded into a register. Hence, contents of the address part of the register and contents of the decrement part of the register or CAR and CDR for short. This nomenclature made it into the language; Lisp used car as the keyword for returning the first element of a list, and cdr for returning the rest of the list. Nowadays, Clojure uses first and rest for these instead, which is much more transparent: it is hardly appropriate to name fundamental language operations after the architecture of a computer from the 1950s. Other languages might refer to them as head and tail instead. The creation of a new cons cell is referred to as cons-ing and it therefore means to create a list by prepending an element at the beginning of another list: user => (cons 0 (list 1 2 3)) (0 1 2 3) Just as we saw in the stack example, cons-ing an element onto a list does not alter the list for any other part of the program that is still using it. Binary trees. That’s all fine, but what about if we want to insert a value in a list, or append it to the end? In this case duplication is necessary; we will have to duplicate all the elements up to the point in the list where the new element is to be inserted. In the worst case - appending an element to the end - we will be forced to duplicate the entire list. Another approach is to use a binary tree instead of a linked list. This is an ordered data structure in which every element has zero, one or two pointers to other elements in the structure: one points to an element whose value is considered lower than the current element, and the other points to an element whose value is considered greater, by whatever comparison is appropriate for the type of data held in the tree: A binary tree can be searched considerably more efficiently than a linked list, but it needs to be balanced for optimum performance. To be optimal, the top element of the tree must be the median of all the values in the tree, and the same must be true for the top elements of all the sub-trees as well. In the worst case, when a binary tree becomes completely skewed down either side, it becomes indistinguishable from a linked list. The structure t above holds a tree of elements that are ordered alphabetically: A, B, C, D, F, G, H. Notice that E is missing. Following the arrows down, elements to the left have lower value than elements to the right, so you can easily traverse the tree to find a value by comparing the value with each element in turn and following the tree down+left or down+right accordingly. Such data structures are therefore good for searching, and a generalised variant called a B-tree is commonly used for indexing database tables. Now let us imagine that we want to insert the missing value E into this tree, which might look in code like this: t' = t.insert(E) As before, we want this insertion operation to leave the original tree t unmodified, while at the same time we would like to reuse as much of t as possible in order to minimise duplication. The result looks like this: To achieve the insertion of E it has been necessary to duplicate D, G, F, while A, B, C are shared between the two data structures, but the effect is that, following the arrows from t the original data structure is unchanged, while following the arrows from t' we see a data structure that now also includes E in the proper position. One direction. The linked list and the binary tree have one vital thing in common: both are examples of directed acyclic graphs. If you haven’t heard this term before, don’t be dismayed, because it’s very simple. A graph is a collection of things (nodes, points, vertices, whatever) that have connections between them: The graph is directed when the connections only go one way: Finally, the graph is acyclic when there are no cycles, that is to say, it is impossible to follow the graph from any point and return back at that same point: It is the directed acyclic nature of these structures - that the connections can be followed in one direction only and never loop back on themselves - that make it possible for us to ‘bolt on’ additional structure to give the appearance of modification or copying, while leaving unaffected any parts of the program that are still looking at the original version of the structure. Don’t panic! I’m hoping that my explanations here have made some sense, but if not, don’t worry too much. It’s not necessary to implement data structures like these when you do functional programming - functional and hybrid-functional languages have immutable data structures built in which work just fine, very probably better than anything most of us could write. My reason for including this section in the series is partly technical interest - I like to know how things work - and partly to allay concerns about efficiency. Immutable data structures do not mean making wholesale copies of entire structures every time something needs to be modified; much more efficient methods exist for doing this. Next time. This concludes our introduction to the fascinating subject of functional programming. I hope it’s been useful. In the next article, we will finish up with some final thoughts. We will discuss the declarative nature of the functional style, and whether functional and object-oriented programming styles can live together. We will briefly look at one of the benefits of functional programming that I did not really go into much in this series - ease of concurrency - and consider the tension between the functional style and efficiency concerns.
https://functional.works-hub.com/learn/the-functional-style-part-8-persistent-data-structures-88253
CC-MAIN-2019-51
refinedweb
2,753
53.95
There are two managed providers currently available with ADO.NET: the SQL Server Managed Provider and OLE DB Managed Provider. The previous example used the SQL Server Managed Provider, which is optimized for SQL Server and is restricted to working with SQL Server databases. The more general solution is the OLE DB Managed Provider, which will connect to any OLE DB provider, including Access. You can rewrite Example 14-1 to work with the Northwind database using Access rather than SQL Server with just a few small changes. First, you need to change the connection string: string connectionString = "provider=Microsoft.JET.OLEDB.4.0; " + "data source = c:\\nwind.mdb"; This query connects to the Northwind database on the C drive. (Your exact path might be different.) Next, change the DataAdapter object to an OLEDBDataAdapter rather than a SqlDataAdapter: OleDbDataAdapter DataAdapter = new OleDbDataAdapter (commandString, connectionString); Also be sure to add a using statement for the OleDb namespace: using System.Data.OleDb; This design pattern continues throughout the two Managed Providers; for every object whose class name begins with "Sql," there is a corresponding class beginning with "OleDb." Example 14-2 illustrates the complete OLE DB version of Example 14-1. using System; using System.Drawing; using System.Collections; using System.ComponentModel; using System.Windows.Forms; using System.Data; using System.Data.OleDb; namespace ProgrammingCSharpWinForm { public class ADOForm1 : System.Windows.Forms.Form { private System.ComponentModel.Container components; private System.Windows.Forms.ListBox lbCustomers; public ADOForm1( ) { InitializeComponent( ); // connect to Northwind Access database string connectionString = "provider=Microsoft.JET.OLEDB.4.0; " + "data source = c:\\nwind.mdb"; // get records from the customers table string commandString = "Select CompanyName, ContactName from Customers"; // create the data set command object // and the DataSet OleDbDataAdapter DataAdapter = new OleDbData( )); } } } The output from this is identical to that from the previous example, as shown in Figure 14-2. The OLE DB Managed Provider is more general than the SQL Managed Provider and can, in fact, be used to connect to SQL Server as well as to any other OLE DB object. Because the SQL Server Provider is optimized for SQL Server, it will be more efficient to use the SQL Server-specific provider when working with SQL Server. In time, any number of specialized managed providers will be available.
http://etutorials.org/Programming/Programming+C.Sharp/Part+II+Programming+with+C/Chapter+14.+Accessing+Data+with+ADO.NET/14.4+Using+OLE+DB+Managed+Providers/
CC-MAIN-2018-05
refinedweb
377
50.23
Try this one (just added #d+ after the IP address saving group): "(.*?) queries.*client (.*?)#d+: query: (.*?) IN" DEMO: >>> s = '09-Sep-2013 10:22:42.540 queries: info: client 10.12.12.66#39177: query: google.com IN AXFR -T (10.10.10.11) ' >>> re.search("(.*?) queries.*client (.*?)#d+: query: (.*?) IN", s).groups() ('09-Sep-2013 10:22:42.540', '10.12.12.66', 'google.com')... The getsockname() call on the socket returned by accept() will give you the address of the local end of the connection. The best way to determine the interface is probably just to match up the local IP address from getsockname() against the interface addresses. Because floating point numbers are by default of type double. To make it a float you append an F. You are getting error in the below assignment: float f = 3.4028235E38; because a double as more precision than a float. So, there is a possible loss of precision. I would have expected just the opposite as floating point literals are by default double and should be more precise. Let's check the binary representation of your number till double precision: 0x47EFFFFFE54DAFF8 = 01000111 11101111 11111111 11111111 11100101 01001101 10101111 11111000 Now since float is a single precision 32-bit floating point value. It can't store all the double values, which are double precision 64-bit floating point values. It looks like you're using the Shunting-yard algorithm, but there are a few things you're doing incorrectly. First of all, after the meat of the algorithm runs you still have to print out the remaining contents of the stack and check for mismatched parens. As the wiki article says: When there are no more tokens to read While there are still operator tokens in the stack: If the operator token on the top of the stack is a parenthesis, then there are mismatched parentheses. Pop the operator onto the output queue. This is easy enough to add to your code, just add something like this after the for loop: while(!isempty(&s)) { ch = pop(&s); if(ch == ')' || ch == '(') { printf(" Mismatched parens "); break; } printf("%c",ch); } B You pass the pointer of the sockaddr_in6 (type-casted) and the size of the sockaddr_in6 structure as arguments: struct sockaddr_in6 in6; socklen_t len6 = sizeof(in6); recvfrom(sock, buf, buflen, (struct sockaddr *) &in6, &len6); Since you pass in the correct length to the function, it will work. in main function add these three lines after for loop End.and remove top=POP(stack,top); inside for loop. while(top!=-1) top=POP(stack,top); printf(" "); modified main function: int main() { char string[15],ch; int top=-1,i; printf("Enter the infix operation: "); gets(string); fflush(stdin); for(i=0;string[i]!='';i++) { ch=string[i]; if( ((ch>='a') && (ch<='z'))||((ch>='A') &&(ch<='Z')) ) { printf("%c",string[i]); } else { ch=string[i]; top=PUSH(stack,top,ch); } } while(top!=-1) top=POP(stack,top); printf(" "); return 0; } Indent your code In the stack trace: at Test.convert(Test.java:21) Is this line: if(check(exp.charAt(i))>check(exp.charAt(i-1))) So I think you mean: if (check(exp.charAt(i)) > check(s1.peek())) Now "The method check(char) in the type Test is not applicable for the arguments (Object)" is raised, so parameterized your Stack. That is, change: Stack s1=new Stack(); To (in Java 7): Stack<Character> s1 = new Stack<>(); You can activate/configure postfix ability of *address_verification* (see also) to block unknown sender. I have used jsonschema before and it is exactly able to do what you want it to do. It also does exception based error reporting if you want, but you can also iterate through all validation errors found in the doc, I've written a short example program which uses your schema (see the Json Schema V3 Spec) and prints out all found errors. Edit: I've changed the script so it now uses a custom validator which allows you to roll your own validation, the code should be self explanatory. You may look up the jsonschema source for infos on extend and how validators are extended/coded. #!/usr/bin/env python2 from jsonschema import Draft3Validator from jsonschema.exceptions import ValidationError from jsonschema.validators import extend import json import sys schema = { "type": "object", "r Add the javax.validation dependency to the pom.xml, also make sure that you already don't have any jar linked to your application with the classes of this jar. <dependency> <groupId>javax.validation</groupId> <artifactId>validation-api</artifactId> <version>1.0.0.GA</version> </dependency> Event Validation.Error is an RoutedEvent and its always raised once your validation returns false in a binding. <StackPanel Validation. <TextBox Text="{Binding PropertyName, ValidatesOnNotifyDataErrors=True, NotifyOnValidationError=True}" /> </StackPanel> Inside your code behind of MainWindow you will need something like this: public void OnError(object sender, ....) { .... } A similar question was asked here. The solution is to use a helper method as described by Kiff. I believe you will have to turn rtld debugging on in the source code in order for LD_DEBUG to have effect on FreeBSD. So, the short answer is -- no, LD_DEBUG does not do anything unless you rebuild runtime linker with -DDEBUG. That said, there's still a lot of useful info that runtime linker can produce. See rtld man page for the details: Some effort has been made for that. The compiler requirement is GCC 4.6. There's also some discussion about Qt5 and BSD on Qt Project website. They say the X11 version is compatible with BSD, but you must compile it from source. I suspect that the server process on the machine is not picking up right Java OPTS. Please try to find the process on the server and check the parameters being passed. In Linux, its usually be ps -ef | grep tomcat or ps -ef | grep java to find out the process and verify the JVM parameters. Edit 1: the is a sample output of the command to find process, which might indicate the java opt parameter values: local-vm-1 [5]:ps -ef | grep tomcat tomcat 4141 1 0 07:38 ? 00:01:33 /apps/mw/jdk/1.6.0.17-64bit/bin/java -Dnop -Xms1024m -Xmx1024m -server -DTC=testplatform -DWMC_ENV =test -XX:MaxNewSize=112m -XX:NewSize=112m -XX:SurvivorRatio=6 -XX:PermSize=256m -XX:MaxPermSize=256m -Dsun.net.inetaddr.ttl=0 -DLISTEN_ADDRESS=wsx -test-vm-dtcp-1.managed.com -Djavax.net.ssl. It's possible that VirtualBox is filtering the keys, since it's stateful as to whether or not the keyboard and mouse are bound to the VM or to the host and VBox's UI. You might be able to hack this by generating a mouse click event in the middle of the console display to make sure VBox has bound the keyboard to the VM, and then start the send keys. Also, Vbox has a scripting system. If you can't get this to work, maybe you could use that to get done what you need. You can use sendto if you are working with IPv6, see this example. Unfortunately this doesn't work with IPv4. As antiduh said, you can use libpcap to capture packets, provided you have access to /dev/bpf (which is usually restricted to root). You may try the packages under packages-current, later versions are available and probably compile on 6.3 Riken Current Package Repository . You are unable to compile on the older versions because you might need to update the libc++ libraries which are required by the older versions. It is a generated function by sys/tools/makeobjops.awk. Look at sys/kern/bus_if.m for the source. You can see the generated code in GENERIC/bus_if.h in your object directory after a kernel build. (or substitute your kernel name for GENERIC if you've changed it.) The function also has a man page. type: man 9 BUS_TEARDOWN_INTR To read the documentation. You need to tell gcc where to look for includes, and you need to tell it to link against sqlite's library, which is probably called libsqlite.so You're looking for something along the lines of gcc -I /usr/local/include -lsqlite test.c. Compile and link using the option -pthread. Note the missing "ell". Update: -pthread instructs all tools involved in creating a binary (pre-processor, compiler, linker) to take care that the application/library to be build runs as intended. (This obviously is only necessary if the source makes use of any member(s) of the pthread_*-family of functions.) Whereas -lpthread links a library called libpthread, nothing more and nothing less. The difference in detail is implementation specific. Note: if -pthread has been specified -lpthread is not necessary as well as not recommended to be specfied. I would say you're hosed, but there are a few things you can try: su to another user that is in the wheel group. Obviously, you need such a user ssh as root. This is unlikely to work, since by default you can't ssh with root use sudo if you have it installed and configured work with the virtual machine provider to give you a secure console (one that allows root login) Again, the first 3 are pretty unrealistic since they won't work using the "default" FreeBSD install. The last one is probably your best bet. The problem is that there is no FreeBSD library included in the snappy JAR file that comes with Cassandra. Install the archivers/snappy-java port, delete the snappy-java JAR file that came with Cassandra, and copy /usr/local/share/java/classes/snappy-java.jar into Cassandra's lib directory. The first step in investigating crashes is to obtain the stack from the core-dump. Obtaining core-dump of Apache may be tricky -- because they are often disabled by default. But it can be done. Now, the fact that it works on Windows is not, actually, proof, that it is not the php-code -- whatever the problem is, it may be triggered by the "just right" sort of racing between threads. The OS, the number of CPUs, and other factors all affect the results of a race... And finally, are you sure, you are using thread-safe (zts) version of PHP itself? Assuming, you want users to enter the shorter url this.domain.com/index and serve the php at this.domain.com/html/folder/index.php; add the following rules to the .htaccess at root /. Options +FollowSymLinks -MultiViews RewriteEngine on RewriteBase / RewriteCond %{HTTP_HOST} ^this.domain.com$ [NC] RewriteCond %{REQUEST_FILENAME} !-d RewriteCond %{REQUEST_FILENAME} !-f RewriteRule ^([^/]+)/?$ html/folder/$1.php [L] I'm not sure what you mean by changing your domain to something else. this.domain.com can be changed to that.domain.com or addon-domain.com but would make sense only if their DNS point to the same server (to serve shared content). Otherwise, it's like an external redirect to some other site. RewriteCond %{HTTP_HOST} ^this.domain.com$ [NC] RewriteRule ^(.*)$ I guess printf works fine, but I prefer to use device_printf as it also prints the device name, and will be easier when looking through logs or dmesg output. Also leave multiple debug prints and check the log files on your system. Most logs for the device drivers are logged in /var/log/messages. But check other log files too. Are you running your code on a virtual machine? Some device drivers don't show up their device files in /dev if the OS is running on a virtual machine. You should probably run your OS on actual hardware for the device file to show up. As far as I know, you can't see the output in dmesg if you cannot find the corresponding device file in /dev but you may have luck with logs as I mentioned. The easiest way to debug is of course using the printf statements. Other than You must use RequiredString Validator instead of Required Validator. RequiredString is for text fields only, Required is for all the other fields <validators> <field name="name"> <field-validator <message>You must select a name</message> </field-validator> </field> </validators> Link to the official documentation RequiredStringValidator checks that a String field is non-null and has a length > 0. (i.e. it isn't ""). The "trim" parameter determines whether it will {@link String#trim() trim} the String before performing the length check. If unspecified, the String will be trimmed.. class SessionsController < ApplicationController def create @user = User.find_by_email(params[:session][:email].downcase) if @user && @user.authenticate(params[:session][:password]) redirect_back_or user_path(current_user) else flash.now[:error] = "Invalid email/password combination" @user = User.create(email: params[:session][:email], password: params[:session][:password]) @showerror = true render 'static_pages/home' end end end you should use instance variable @user,because in view @user is used when register failed. Assuming there are no special characters in the string (like $ or 0x): def OnValidateCheckSum(self, P, d): valid_hex_char = lambda c: c in 'abcdef0123456789' return (len(P) < 5) and (all(valid_hex_char(z) for z in P.lower())) In this case, I am certain the operator is exponentiation. 8 3 4 + - 3 8 2 / + * 2 $ 3 + is: 8 3 4 + - = 1 3 8 2 / + = 7 1 7 * = 7 7 2 $ = 49 49 3 + = 52. Or maybe 7 2 $ = 128 128 3 + = 131. Depends on how your instructor defined the operator. If vec1 and vec2 are vectors, they don't have increment operators. They're containers. You need to use iterators to traverse them. Something like: auto it1 = vec1.cbegin(), it2 = vec2.cbegin(); while ( prefix_length < 3 and it1!=vec1.cend() and it2!=vec2.cend() and equal(*it1++, *it2++) ) ++prefix_length; Take a look at php.ini file /etc/php5/apache2/php.ini on debian: [mail function] ; For Win32 only. ; SMTP = localhost ; smtp_port = 25 ; For Win32 only. ; ;sendmail_from = me@example.com Uncomment sendmail_from: and set it to your needs. Make sure you do: service apache2 restart or service httpd restart to change take effect. Hope it helps, Mirko Non-member overloads also use a dummy int parameter to distinguish them: friend void operator++(Number&); // prefix friend void operator++(Number&, int); // postfix Note that some people might expect these to emulate the behaviour of the built-in operator by returning, respectively, the new and old values: Number& operator++(Number& fst) { fst.number=fst.number+1; return fst; // reference to new value } Number operator++(Number& fst, int) { Number old = fst; ++fst; return old; // copy of old value } PHP runs on the server. onClick executes Javascript on the CLIENT machine. You can NOT directly invoke PHP functions via Javascript code, or vice versa. What you're doing can be accomplished with a simple form: <?php if ($_SERVER["REQUEST_METHOD"] == 'POST') { $to = $_POST['to']; $text = $_POST['text']; mail($to, .....); } ?> <form method="POST" action=""> <input type="text" name="to" /> <input type="text" name="text" /> <input type="submit" /> </form> There is no need to use Javascript at all. Check the settings in your /etc/postfix/main.cf file, specifically the setting for virtual_mailbox_domains. If your domain is in this line, but another server is the MX for your domain, then this would explain the problem - i.e. the postfix server thinks it's handling incoming mail for your domain, but the MX records say otherwise. Postfix and virtualmin both support alias "like" gmail with simple config. Just go to "System Information > Postfix Mail Server > General Options" (http(s)://<yourserver>:<vitualminport>/postfix/general.cgi). and type "address extensions", for quick search) and fill it with "+". Alternatively you can edit /etc/main.cf and place a line like recipient_delimiter = +
http://www.w3hello.com/questions/freebsd-postfix-python-policyd-spf-ip-addres-validation-error
CC-MAIN-2018-17
refinedweb
2,600
65.93
Python styleguide¶: E501`` to the offending line to skip it in syntax checks. Note Configuring your editor to display a line at 79th column helps a lot here and saves time. Note The line length rule also applies to non-Python source files, such as .zcml files, but is a bit more relaxed there. Note The rule explicitly does not apply to documentation .rst files. For .rst files including the package documentation but also README.rst, CHANGES.rst, and doctests, add a line break after each sentence. See the :doc:` documentation styleguide </about/contributing/documentation_styleguide>` for more information.. autopep8¶ Making old code pep8 compliant can be a lot of work. There is a tool that can automatically do some of this work for you: autopep8. This fixes various issues, for example fixing indentation to be a multiple of four. Just install it with pip and call it like this: pip install autopep8 autopep8 -i filename.py autopep8 -i -r directory It is best to first run autopep8 in the default non aggressive mode, which means it only does whitespace changes. To run this recursively on the current directory, changing files in place: autopep8 -i -r . Quickly check the changes and then commit them. WARNING: be very careful when running this in a skins directory, if you run it there at all. It will make changes to the top of the file like this, which completely breaks the skin script: -##parameters=policy_in='' +# parameters=policy_in='' With those safe changes out of the way, you can move on to a second, more aggresive round: autopep8 -i --aggressive -r . Check these changes more thoroughly. At the very least check if Plone can still start in the foreground and that there are no failures or errors in the tests. Not all changes are always safe. You can ignore some checks: autopep8 -i --ignore W690,E711,E721 --aggressive -r . This skips the following changes: W690: Fix various deprecated code (via lib2to3). (Can be bad for Python 2.4.) E721: Use isinstance() instead of comparing types directly. (There are uses of this in for example GenericSetup and plone.api that must not be fixed.) E711: Fix comparison with None. (This can break SQLAlchemy code.) You can check what would be changed by one specific code: autopep8 --diff --select E309 -r . Indentation¶ For Python files, we stick with the PEP 8 recommondation: Use 4 spaces per indentation level. or keyword arguments, like so: # GOOD print "{0} is not {1}".format(1, 2) print "{bar} is not {foo}".format(foo=1, bar.). We sort everything alphabetically case insensitive and insert one blank line between from foo import bar and import baz blocks. Conditional imports come last. Don’t use multi-line imports but import each identifier from a module in a separate line. Again, we do not distinguish between what is standard lib, external package or internal package in order to save time and avoid the hassle of explaining which is which. # GOOD from __future__ import division from Acquisition import aq_inner from datetime import datetime from datetime import timedelta from plone.api import portal from plone.api.exc import MissingParameterError from Products.CMFCore.interfaces import ISiteRoot from Products.CMFCore.WorkflowCore import WorkflowException import pkg_resources import random try: pkg_resources.get_distribution('plone.dexterity') except pkg_resources.DistributionNotFound: HAS_DEXTERITY = False else: HAS_DEXTERITY = True isort, a python tool to sort imports can be configured to sort exactly as described above. Add the following: [settings] force_alphabetical_sort = True force_single_line = True lines_after_imports = 2 line_length = 200 not_skip = __init__.py To either .isort.cfg or changing the header from [settings] to [isort] and putting it on setup.cfg. You can also use plone.recipe.codeanalysis with the flake8-isort plugin enabled to check for it..dev < 1.1.a1 < 1.1.a2 < 1.1.b < 1.1.rc') True dev and dev0 are treated as the same: >>> parse_version('1.1.dev') == parse_version('1.1.dev0') True Setuptools recommends to separate parts with a dot. The website about semantic versioning is also worth a read. Concrete Rules¶ Do not use tabs in Python code! Use spaces as indenting, 4 spaces for each level. We don’t “require” PEP8, but most people use it and it’s good for you. Indent properly, even in HTML. Never use a bare except. Anything like except: passwill likely be reverted instantly. Avoid tal:on-error, since this swallows exceptions. Don’t use hasattr()- this swallows exceptions, use getattr(foo, 'bar', None)instead. The problem with swallowed exceptions is not just poor error reporting. This can also mask ConflictErrors, which indicate that something has gone wrong at the ZODB level! Never put any HTML in Python code and return it as a string. There are exceptions, though. Do not acquire anything unless absolutely necessary, especially tools. For example, instead of using context.plone_utils, use: from Products.CMFCore.utils import getToolByName plone_utils = getToolByName(context, 'plone_utils') Do not put too much logic in ZPT (use Views instead!) Remember to add i18n tags in ZPTs and Python code.
https://docs.plone.org/develop/styleguide/python.html
CC-MAIN-2020-16
refinedweb
830
60.01
Tools to create simple and consistent interfaces to complicated and varied data sources. Project description - py2store - Quick peek - Use cases - Remove data access entropy - Get a key-value view of files - Other key-value views and tools - Graze - Grub - More examples - How it works - A few persisters you can use - Philosophical FAQs - Some links <small>Table of contents generated with markdown-toc</small> Note: The core of py2store has now been moved to dol, and many of the specialized data object layers moved to separate packages. py2store's functionality remains the same for now, forwarding to these packages. It's advised to use dol (and/or its specialized spin-off packages) directly when sufficient, though. py2store Storage CRUD how and where you want it. PyBay video about py2store. Install it (e.g. pip install py2store). List, read, write, and delete data in a structured data source/target, as if manipulating simple python builtins (dicts, lists), or through the interface you want to interact with, with configuration or physical particularities out of the way. Also, being able to change these particularities without having to change the business-logic code. If you're not a "read from top to bottom" kinda person, here are some tips: Quick peek will show you a simple example of how it looks and feels. Use cases will give you an idea of how py2store can be useful to you, if at all. The section with the best bang for the buck is probably remove (much of the) data access entropy. It will give you simple (but real) examples of how to use py2store tooling to bend your interface with data to your will. How it works will give you a sense of how it works. More examples will give you a taste of how you can adapt the three main aspects of storage (persistence, serialization, and indexing) to your needs. Quick peek Think of type of storage you want to use and just go ahead, like you're using a dict. Here's an example for local storage (you must you string keys only here). >>> from py2store import QuickStore >>> >>> store = QuickStore() # will print what (tmp) rootdir it is choosing >>> # Write something and then read it out again >>> store['foo'] = 'baz' >>> 'foo' in store # do you have the key 'foo' in your store? True >>> store['foo'] # what is the value for 'foo'? 'baz' >>> >>> # Okay, it behaves like a dict, but go have a look in your file system, >>> # and see that there is now a file in the rootdir, named 'foo'! >>> >>> # Write something more complicated >>> store['hello/world'] = [1, 'flew', {'over': 'a', "cuckoo's": map}] >>> stored_val = store['hello/world'] >>> stored_val == [1, 'flew', {'over': 'a', "cuckoo's": map}] # was it retrieved correctly? True >>> >>> # how many items do you have now? >>> assert len(store) >= 2 # can't be sure there were no elements before, so can't assert == 2 >>> >>> # delete the stuff you've written >>> del store['foo'] >>> del store['hello/world'] QuickStore will by default store things in local files, using pickle as the serializer. If a root directory is not specified, it will use a tmp directory it will create (the first time you try to store something) It will create any directories that need to be created to satisfy any/key/that/contains/slashes. Of course, everything is configurable. A list of stores for various uses py2store provides tools to create the dict-like interface to data you need. If you want to just use existing interfaces, build on it, or find examples of how to make such interfaces, check out the ever-growing list of py2store-using projects: - mongodol: For MongoDB - hear: Read/write audio data flexibly. - tabled: Data as pandas.DataFramefrom various sources - msword: Simple mapping view to docx (Word Doc) elements - sshdol: Remote (ssh) files access - haggle: Easily search, download, and use kaggle datasets. - pyckup: Grab data simply and define protocols for others to do the same. - hubcap: Dict-like interface to github. - graze: Cache the internet. - grub: A ridiculously simple search engine maker. Just for fun projects: - cult: Religious texts search engine. 18mn application of grub. - laugh: A (py2store-based) joke finder. Use cases Interfacing reads How many times did someone share some data with you in the form of a zip of some nested folders whose structure and naming choices are fascinatingly obscure? And how much time do you then spend to write code to interface with that freak of nature? Well, one of the intents of py2store is to make that easier to do. You still need to understand the structure of the data store and how to deserialize these datas into python objects you can manipulate. But with the proper tool, you shouldn't have to do much more than that. Changing where and how things are stored Ever have to switch where you persist things (say from file system to S3), or change the way key into your data, or the way that data is serialized? If you use py2store tools to separate the different storage concerns, it'll be quite easy to change, since change will be localized. And if you're dealing with code that was already written, with concerns all mixed up, py2store should still be able to help since you'll be able to more easily give the new system a facade that makes it look like the old one. All of this can also be applied to data bases as well, in-so-far as the CRUD operations you're using are covered by the base methods. Adapters: When the learning curve is in the way of learning Shinny new storage mechanisms (DBs etc.) are born constantly, and some folks start using them, and we are eventually lead to use them as well if we need to work with those folks' systems. And though we'd love to learn the wonderful new capabilities the new kid on the block has, sometimes we just don't have time for that. Wouldn't it be nice if someone wrote an adapter to the new system that had an interface we were familiar with? Talking to SQL as if it were mongo (or visa versa). Talking to S3 as if it were a file system. Now it's not a long term solution: If we're really going to be using the new system intensively, we should learn it. But when you just got to get stuff done, having a familiar facade to something new is a life saver. py2store would like to make it easier for you roll out an adapter to be able to talk to the new system in the way you are familiar with. Thinking about storage later, if ever You have a new project or need to write a new app. You'll need to store stuff and read stuff back. Stuff: Different kinds of resources that your app will need to function. Some people enjoy thinking of how to optimize that aspect. I don't. I'll leave it to the experts to do so when the time comes. Often though, the time is later, if ever. Few proof of concepts and MVPs ever make it to prod. So instead, I'd like to just get on with the business logic and write my program. So what I need is an easy way to get some minimal storage functionality. But when the time comes to optimize, I shouldn't have to change my code, but instead just change the way my DAO does things. What I need is py2store. Remove data access entropy Data comes from many different sources, organization, and formats. Data is needed in many different contexts, which comes with its own natural data organization and formats. In between both: A entropic mess of ad-hoc connections and annoying time-consuming and error prone boilerplate. py2store (and it's now many extensions) is there to mitigate this. The design gods say SOC, DRY, SOLID* and such. That's good design, yes. But it can take more work to achieve these principles. We'd like to make it easier to do it right than do it wrong. (*) Separation (Of) Concerns, Don't Repeat Yourself,)) We need to determine what are the most common operations we want to do on data, and decide on a common way to express these operations, no matter what the implementation details are. - get/read some data - set/write some data - list/see what data we have - filter - cache ... Looking at this, we see that the base operations for complex data systems such as data bases and file systems overlap significantly with the base operations on python (or any programming language) objects. So we'll reflect this in our choice of a common "language" for these operations. For examples, once projected to a py2store object, iterating over the contents of a data base, or over files, or over the elements of a python (iterable) object should look the same, in code. Achieving this, we achieve SOC, but also set ourselves up for tooling that can assume this consistency, therefore be DRY, and many of the SOLID principles of design. Also mentionable: So far, py2store core tools are all pure python -- no dependencies on anything else. Now, when you want to specialize a store (say talk to data bases, web services, acquire special formats (audio, etc.)), then you'll need to pull in a few helpful packages. But the core tooling is pure. Get a key-value view of files Let's get an object that gives you access to local files as if they were a dictionary (a Mapping). LocalBinaryStore: A base store for local files import os import py2store rootdir = os.path.dirname(py2store.__file__) rootdir '/Users/Thor.Whalen/Dropbox/dev/p3/proj/i/py2store/py2store' from py2store import LocalBinaryStore s = LocalBinaryStore(rootdir) len(s) 213 list(s)[:10] ['filesys.py', 'misc.py', 'mixins.py', 'test/trans_test.py', 'test/quick_test.py', 'test/util.py', 'test/__init__.py', 'test/__pycache__/simple_test.cpython-38.pyc', 'test/__pycache__/__init__.cpython-38.pyc', 'test/__pycache__/quick_test.cpython-38.pyc'] v = s['filesys.py'] type(v), len(v) (bytes, 9470) And really, it's an actual Mapping, so you can interact with it as you would with a dict. len(s) s.items() s.keys() s.values() 'filesys.py' in s True In fact more, it's a subclass of collections.abc.MutableMapping, so can write data to a key by doing this: s[key] = data and delete a key by doing del s[key] (We're not demoing this here because we don't want you to write stuff in py2store files, which we're using as a demo folder.) Also, note that by default py2store "persisters" (as these mutable mappings are called) have their clear() method removed to avoid mistakingly deleting a whole data base or file system. key filtering Say you only want .py files... from py2store import filt_iter s = filt_iter(s, filt=lambda k: k.endswith('.py')) len(s) 102 What's the value of a key? k = 'filesys.py' v = s[k] print(f"{type(v)=}, {len(v)=}") type(v)=<class 'bytes'>, len(v)=9470 value transformation (a.k.a. serialization and deserialization) For .py files, it makes sense to get data as text, not bytes. So let's tell our reader/store that's what we want... from py2store import wrap_kvs s = wrap_kvs(s, obj_of_data=lambda v: v.decode()) v = s[k] # let's get the value of that key again print(f"{type(v)=}, {len(v)=}") # and see what v is like now... type(v)=<class 'str'>, len(v)=9470 print(v[ key transformation That was "value transformation" (in many some cases, known as "(de)serialization"). And yes, if you were interested in transforming data on writes (a.k.a. serialization), you can specify that too. Often it's useful to transform keys too. Our current keys betray that a file system is under the hood; We have extensions ( .py) and file separators. That's not pure SOC. No problem, let's transform keys too, using tuples instead... s = wrap_kvs(s, key_of_id=lambda _id: tuple(_id[:-len('.py')].split(os.path.sep)), id_of_key=lambda k: k + '.py' if isinstance(k, str) else os.path.sep.join(k) + '.py' ) list(s)[:10] [('filesys',), ('misc',), ('mixins',), ('test', 'trans_test'), ('test', 'quick_test'), ('test', 'util'), ('test', '__init__'), ('test', 'local_files_test'), ('test', 'simple_test'), ('test', 'scrap')] Note that we made it so that when there's only one element, you can specify as string itself: both s['filesys'] or s[('filesys',)] are valid print(s['filesys'][ caching As of now, every time you iterate over keys, you ask the file system to list files, then filter them (to get only .py files). That's not a big deal for a few hundred files, but if you're dealing with lots of files you'll feel the slow-down (and your file system will feel it too). If you're not deleting or creating files in the root folder often (or don't care about freshness), your simplest solution is to cache the keys. The simplest would be to do this: from py2store import cached_keys s = cached_keys(s) Only, you won't really see the difference if we just do that (unless your rootdir has many many files). But cached_keys (as the other functions we've introduced above) has more too it, and we'll demo that here so you can actually observe a difference. cached_keys has a (keyword-only) argument called keys_cache that specifies what to cache the keys into (more specifically, what function to call on the first key iteration (when and if it happens)). The default is keys_cache. But say we wanted to always get our keys in sorted order. Well then... from py2store import cached_keys s = cached_keys(s, keys_cache=sorted) list(s)[:10] [('__init__',), ('access',), ('appendable',), ('base',), ('caching',), ('core',), ('dig',), ('errors',), ('examples', '__init__'), ('examples', 'code_navig')] Note that there's a lot more too caching. We'll just mention two useful things to remember here: You can use keys_cacheto specify a "precomputed/explicit" collection of keys to use in the store. This allows you to have full flexibility on defining sub-sets of stores. Here we talked about caching keys, but caching values is arguably more important. If it takes a long time to fetch remote data, you want to cache it locally. Further, if loading data from local storage to RAM is creating lag, you can cache in RAM. And you can do all this easily (and separate from the concern of both source and cache stores) using tools you an find in py2store.caching. Aggregating these transformations to be able to apply them to other situations (DRY!) from lined import Line # Line just makes a function by composing/chaining several functions from py2store import LocalBinaryStore, filt_iter, wrap_kvs, cached_keys key_filter_wrapper = filt_iter(filt=lambda k: k.endswith('.py')) key_and_value_wrapper = wrap_kvs( obj_of_data=lambda v: v.decode(), key_of_id=lambda _id: tuple(_id[:-len('.py')].split(os.path.sep)), id_of_key=lambda k: k + '.py' if isinstance(k, str) else os.path.sep.join(k) + '.py' ) caching_wrapper = cached_keys(keys_cache=sorted) # my_cls_wrapper is basically the pipeline: input -> key_filter_wrapper -> key_and_value_wrapper -> caching_wrapper my_cls_wrapper = Line(key_filter_wrapper, key_and_value_wrapper, caching_wrapper) @my_cls_wrapper class PyFilesReader(LocalBinaryStore): """Access to local .py files""" s = PyFilesReader(rootdir) len(s) 102 list(s)[:10] [('__init__',), ('access',), ('appendable',), ('base',), ('caching',), ('core',), ('dig',), ('errors',), ('examples', '__init__'), ('examples', 'code_navig')] print(s['caching'][:300]) """Tools to add caching layers to stores.""" from functools import wraps, partial from typing import Iterable, Union, Callable, Hashable, Any from py2store.trans import store_decorator ############################################################################################################### Other key-value views and tools Now that you've seen a few tools (key/value transformation, filtering and caching) you can use to change one mapping to another, what about getting a mapping (i.e. " dict-like") view of a data source in the first place? If you're advanced, you can just make your own by sub-classing KvReader or KvPersister, and adding the required __iter__ and __getitem__ methods (as well as __setitem__ and __delitem__ for KvPersister, if you want to be able to write/delete data too). But we (and others) are offer an ever growing slew of mapping views of all kinds of data sources. Here are a few you can check out: The classics (data bases and storage systems): from py2store import ( S3BinaryStore, # to talk to AWS S3 (uses boto) SQLAlchemyStore, # to talk to sql (uses alchemy) ) # from py2store.stores.mongo_store import MongoStore # moved to mongodol To access configs and customized store specifications: from py2store import ( myconfigs, mystores ) To access contents of zip files: from py2store import ( FilesOfZip, FlatZipFilesReader, ) To customize the format you want your data in (depending on the context... like a file extension): from py2store.misc import ( get_obj, MiscReaderMixin, MiscStoreMixin, MiscGetterAndSetter, ) To define string, tuple, or dict formats for keys, and move between them: from py2store.key_mappers.naming import StrTupleDict But probably the best way to learn the way of py2store is to see how easily powerful functionalities can be made with it. We'll demo a few of these now. Graze graze's jingle is "Cache the internet". That's (sort of) what it does. Graze is a mapping that uses urls as keys, pulling content from the internet and caching to local files. Quite simply: from graze import Graze g = Graze() list(g) # lists the urls you already have locally del g[url] # deletes that local file you have cached b = g[url] # gets the contents of the url (taken locally if there, or downloading from the internet (and caching locally) if not. Main use case: Include the data acquisition code in your usage code. Suppose you want to write some code that uses some data. You need that data to run the analyses. What do you do? - write some instructions on where and how to get the data, where to put it in the file system, and/or what config file or environment variable to tinker with to tell it where that data is, or... - use graze Since it's implemented as a mapping, you can easily transform it to do all kinds of things (namely, using py2store tools). Things like - getting your content in a more ready-to-use object than bytes, or - putting an expiry date on some cached items, so that it will automatically re-fresh the data The original code of Graze was effectively 57 lines (47 without imports). Check it out. That's because it it had to do is: - define url data fetching as internet[url] - define a local files (py2)store - connect both through caching logic - do some key mapping to get from url to local path and visa-versa And all those things are made easy with py2store. from graze import Graze g = Graze() # uses a default directory to store stuff, but is customizable len(g) # how many grazed files do we have? 52 sorted(g)[:3] # first (in sorted order) 3 keys ['', '', ''] Example using baby names data from io import BytesIO import pandas as pd from py2store import FilesOfZip # getting the raw data url = '' # this specifies both where to get the data from, and where to put it locally! b = g[url] print(f"b is an array of {len(b)} {type(b)} of a zip. We'll give these to FilesOfZip to be able to read them") # formatting it to be useful z = FilesOfZip(b) print(f"First 4 file names in the zip: {list(z)[:4]}") v = z['AK.TXT'] # bytes of that (zipped) file df = pd.read_csv(BytesIO(v), header=None) df.columns = ['state', 'gender', 'year', 'name', 'number'] df b is an array of 22148032 <class 'bytes'> of a zip. We'll give these to FilesOfZip to be able to read them First 4 file names in the zip: ['AK.TXT', 'AL.TXT', 'AR.TXT', 'AZ.TXT'] </style></style> .dataframe tbody tr th { vertical-align: top; } .dataframe thead th { text-align: right; } 28962 rows × 5 columns Example using emoji image urls data url = '' if url in g: # if we've cached this already del g[url] # remove it from cache assert url not in g import json d = json.loads(g[url].decode()) len(d) 1510 list(d)[330:340] ['couple_with_heart_woman_man', 'couple_with_heart_woman_woman', 'couplekiss_man_man', 'couplekiss_man_woman', 'couplekiss_woman_woman', 'cow', 'cow2', 'cowboy_hat_face', 'crab', 'crayon'] d['cow'] '' A little py2store exercise: A store to get image objects of emojis As a demo of py2store, let's make a store that allows you to get (displayable) image objects of emojis, taking care of downloading and caching the name:url information for you. from functools import cached_property import json from py2store import KvReader from graze import graze class EmojiUrls(KvReader): """A store of emoji urls. Will automatically download and cache emoji (name, url) map to a local file when first used.""" data_source_url = '' @cached_property def data(self): b = graze(self.data_source_url) # does the same thing as Graze()[url] return json.loads(b.decode()) def __iter__(self): yield from self.data def __getitem__(self, k): return self.data[k] # note, normally you would define an explicit __len__ and __contains__ to make these more efficient emojis = EmojiUrls() len(emojis), emojis['cow'] (1510, '') from IPython.display import Image import requests from py2store import wrap_kvs, add_ipython_key_completions @add_ipython_key_completions # this enables tab-completion of keys in jupyter notebooks @wrap_kvs(obj_of_data=lambda url: Image(requests.get(url).content)) class EmojiImages(EmojiUrls): """An emoji reader returning Image objects (displayable in jupyter notebooks)""" emojis = EmojiImages() len(emojis) 1510 emojis['cow'] Grub Quick and easy search engine of anything (that can be expressed as a key-value store of text). search your code # Make a store to search in (only requirements is that it provide text values) import os import py2store rootdir = os.path.dirname(py2store.__file__) store_to_search = LocalBinaryStore(os.path.join(rootdir) + '{}.py') # The '{}.py' is a short-hand of LocalBinaryStore to filter for .py files only # make a search object for that store from grub import SearchStore search = SearchStore(store_to_search) search('cache key-value pairs') array(['py2store/caching.py', 'py2store/utils/cumul_aggreg_write.py', 'py2store/trans.py', 'py2store/examples/write_caches.py', 'py2store/utils/cache_descriptors.py', 'py2store/utils/explicit.py', 'py2store/persisters/arangodb_w_pyarango.py', 'py2store/persisters/dynamodb_w_boto3.py', 'py2store/stores/delegation_stores.py', 'py2store/util.py'], dtype=object) search jokes (and download them automatically Some code that acquires and locally caches a joke data, makes a mapping view of it (here just a dict in memory), and builds a search engine to find jokes. All that, in a few lines. import json from graze.base import graze from grub import SearchStore # reddit jokes (194553 at the time of writing this) jokes_url = '' raw_data = json.loads(graze(jokes_url).decode()) joke_store = {x['id']: f"{x['title']}\n--> {x['body']}\n(score: {x['score']})" for x in raw_data} search_joke = SearchStore(joke_store) results_idx = search_joke('searching for something funny') print(joke_store[results_idx[0]]) # top joke (not by score, but by relevance to search terms) want to hear me say something funny? --> well alright then...."something funny" there (score: 0) More examples Looks like a dict Below, we make a default store and demo a few basic operations on it. The default store uses a dict as it's backend persister. A dict is neither really a backend, nor a persister. But it helps to try things out with no footprint. from py2store.base import Store s = Store() There's nothing fantastic in the above code. I've just demoed some operations on a dict. But it's exactly this simplicity that py2store aims for. You can now replace the s = Store() with s = AnotherStore(...) where AnotherStore now uses some other backend that could be remote or local, could be a database, or any system that can store something (the value) somewhere (the key). You can choose from an existing store (e.g. local files, for AWS S3, for MongoDB) or quite easily make your own (more on that later). And yet, it will still look like you're talking to a dict. This not only means that you can talk to various storage systems without having to actually learn how to, but also means that the same business logic code you've written can be reused with no modification. But py2store offers more than just a simple consistent facade to where you store things, but also provides means to define how you do it. In the case of key-value storage, the "how" is defined on the basis of the keys (how you reference) the objects you're storing and the values (how you serialize and deserialize those objects). Converting keys: Relative paths and absolute paths Take a look at the following example, that adds a layer of key conversion to a store. # defining the store from py2store.base import Store class PrefixedKeyStore(Store): prefix = '' def _id_of_key(self, key): return self.prefix + key # prepend prefix before passing on to store def _key_of_id(self, _id): if not _id.startswith(self.prefix): raise ValueError(f"_id {_id} wasn't prefixed with {self.prefix}") else: return _id[len(self.prefix):] # don't show the user the prefix # trying the store out s = PrefixedKeyStore() s.prefix = '/ROOT/' Q: That wasn't impressive! It's just the same as the first Store. What's this prefix all about? A: The prefix thing is hidden, and that's the point. You want to talk the "relative" (i.e "prefix-free") language, but may have the need for this prefix to be prepended to the key before persisting the data and that prefix to be removed before being displayed to the user. Think of working with files. Do you want to have to specify the root folder every time you store something or retrieve something? Q: Prove it! A: Okay, let's look under the hood at what the underlying store (a dict) is dealing with: assert list(s.store.items()) == [('/ROOT/foo', 'bar'), ('/ROOT/another', 'item')] You see? The keys that the "backend" is using are actually prefixed with "/ROOT/" Serialization/Deserialization Let's now demo serialization and deserialization. Say we want to deserialize any text we stored by appending "hello " to everything stored. # defining the store from py2store.base import Store class MyFunnyStore(Store): def _obj_of_data(self, data): return f'hello {data}' # trying the store out s = MyFunnyStore() assert list(s) == [] s['foo'] = 'bar' # put 'bar' in 'foo' assert 'foo' in s # check that 'foo' is in (i.e. a key of) s assert s['foo'] == 'hello bar' # the value that 'foo' contains SEEMS to be 'hello bar' assert list(s) == ['foo'] # list all the keys (there's only one) assert list(s.items()) == [('foo', 'hello bar')] # list all the (key, value) pairs assert list(s.values()) == ['hello bar'] # list all the values Note: This is an easy example to demo on-load transformation of data (i.e. deserialization), but wouldn't be considered "deserialization" by all. See the Should storage transform the data? discussion below. In the following, we want to serialize our text by upper-casing it (and see it as such) when we retrieve the text. # defining the store from py2store.base import Store class MyOtherFunnyStore(Store): def _data_of_obj(self, obj): return obj.upper() # trying the store out s = MyOtherFunnyStore() In the last to serialization examples, we only implemented one way transformations. That's all fine if you just want to have a writer (so only need a serializer) or a reader (so only need a deserializer). In most cases though, you will need two way transformations, specifying how the object should be serialized to be stored, and how it should be deserialized to get your object back. A pickle store Say you wanted the store to pickle as your serializer. Here's how this could look like. # defining the store import pickle from py2store.base import Store class PickleStore(Store): protocol = None fix_imports = True encoding = 'ASCII' def _data_of_obj(self, obj): # serializer return pickle.dumps(obj, protocol=self.protocol, fix_imports=self.fix_imports) def _obj_of_data(self, data): # deserializer return pickle.loads(data, fix_imports=self.fix_imports, encoding=self.encoding) # trying the store out s = PickleStore() assert list(s) == [] s['foo'] = 'bar' # put 'bar' in 'foo' assert s['foo'] == 'bar' # I can get 'bar' back # behind the scenes though, it's really a pickle that is stored: assert s.store['foo'] == b'\x80\x03X\x03\x00\x00\x00barq\x00.' Again, it doesn't seem that impressive that you can get back a string that you stored in a dict. For two reasons: (1) you don't really need to serialize strings to store them and (2) you don't need to serialize python objects to store them in a dict. But if you (1) were trying to store more complex types and (2) were actually persisting them in a file system or database, then you'll need to serialize. The point here is that the serialization and persisting concerns are separated from the storage and retrieval concern. The code still looks like you're working with a dict. But how do you change the persister? By using a persister that persists where you want. You can also write your own. All a persister needs to work with py2store is that it follows the interface python's collections.MutableMapping (or a subset thereof). More on how to make your own persister later You just need to follow the collections.MutableMapping interface. Below a simple example of how to persist in files under a given folder. (Warning: If you want a local file store, don't use this, but one of the easier to use, robust and safe stores in the stores folder!) import os from collections.abc import MutableMapping class SimpleFilePersister(MutableMapping): """Read/write (text or binary) data to files under a given rootdir. Keys must be absolute file paths. Paths that don't start with rootdir will be raise a KeyValidationError """ def __init__(self, rootdir, mode='t'): if not rootdir.endswith(os.path.sep): rootdir = rootdir + os.path.sep self.rootdir = rootdir assert mode in {'t', 'b', ''}, f"mode ({mode}) not valid: Must be 't' or 'b'" self.mode = mode def __getitem__(self, k): with open(k, 'r' + self.mode) as fp: data = fp.read() return data def __setitem__(self, k, v): with open(k, 'w' + self.mode) as fp: fp.write(v) def __delitem__(self, k): os.remove(k) def __contains__(self, k): """ Implementation of "k in self" check. Note: MutableMapping gives you this for free, using a try/except on __getitem__, but the following uses faster os functionality.""" return os.path.isfile(k) def __iter__(self): yield from filter(os.path.isfile, map(lambda x: os.path.join(self.rootdir, x), os.listdir(self.rootdir))) def __len__(self): """Note: There's system-specific faster ways to do this.""" count = 0 for _ in self.__iter__(): count += 1 return count def clear(self): """MutableMapping creates a 'delete all' functionality by default. Better disable it!""" raise NotImplementedError("If you really want to do that, loop on all keys and remove them one by one.") Now try this out: import os # What folder you want to use. Defaulting to the home folder. You can choose another place, but make sure rootdir = os.path.expanduser('~/') # Defaulting to the home folder. You can choose another place persister = SimpleFilePersister(rootdir) foo_fullpath = os.path.join(rootdir, 'foo') persister[foo_fullpath] = 'bar' # write 'bar' to a file named foo_fullpath assert persister[foo_fullpath] == 'bar' # see that you can read the contents of that file to get your 'bar' back assert foo_fullpath in persister # the full filepath indeed exists in (i.e. "is a key of") the persister assert foo_fullpath in list(persister) # you can list all the contents of the rootdir and file foo_fullpath in it Talk your own CRUD dialect Don't like this dict-like interface? Want to talk your own CRUD words? We got you covered! Just subclass SimpleFilePersister and make the changes you want to make: class MySimpleFilePersister(SimpleFilePersister): # If it's just renaming, it's easy read = SimpleFilePersister.__getitem__ exists = SimpleFilePersister.__contains__ n_files = SimpleFilePersister.__len__ # here we want a new method that gives us an actual list of the filepaths in the rootdir list_files = lambda self: list(self.__iter__()) # And for write we want val and key to be swapped in our interface, def write(self, val, key): # note that we wanted val to come first here (as with json.dump and pickle.dump interface) return self.__setitem__(key, val) my_persister = MySimpleFilePersister(rootdir) foo_fullpath = os.path.join(rootdir, 'foo1') my_persister.write('bar1', foo_fullpath) # write 'bar1' to a file named foo_fullpath assert my_persister.read(foo_fullpath) == 'bar1' # see that you can read the contents of that file to get your 'bar1' back assert my_persister.exists(foo_fullpath) # the full filepath indeed exists in (i.e. "is a key of") the persister assert foo_fullpath in my_persister.list_files() # you can list all the contents of the rootdir and file foo_fullpath in it Transforming keys But dealing with full paths can be annoying, and might couple code too tightly with a particular local system. We'd like to use relative paths instead. Easy: Wrap the persister in the PrefixedKeyStore defined earlier. s = PrefixedKeyStore(store=persister) # wrap your persister with the PrefixedKeyStore defined earlier if not rootdir.endswith(os.path.sep): rootdir = rootdir + os.path.sep # make sure the rootdir ends with slash s.prefix = rootdir # use rootdir as prefix in keys s['foo2'] = 'bar2' # write 'bar2' to a file assert s['foo2'] == 'bar2' # see that you can read the contents of that file to get your 'bar2' back assert 'foo2' in s assert 'foo2' in list(s) How it works py2store offers three aspects that you can define or modify to store things where you like and how you like it: - Persistence: Where things are actually stored (memory, files, DBs, etc.) - Serialization: Value transformaton. How python objects should be transformed before it is persisted, and how persisted data should be transformed into python objects. - Indexing: Key transformation. How you name/id/index your data. Full or relative paths. Unique combination of parameters (e.g. (country, city)). Etc. All of this allows you to do operations such as "store this (value) in there (persitence) as that (key)", moving the tedious particularities of the "in there" as well how the "this" and "that" are transformed to fit in there, all out of the way of the business logic code. The way it should be. Note: Where data is actually persisted just depends on what the base CRUD methods ( __getitem__, __setitem__, __delitem__, __iter__, etc.) define them to be. A few persisters you can use We'll go through a few basic persisters that are ready to use. There are more in each category, and we'll be adding new categories, but this should get you started. Here is a useful function to perform a basic test on a store, given a key and value. It doesn't test all store method (see test modules for that), but demos the basic functionality that pretty much every store should be able to do. def basic_test(store, k='foo', v='bar'): """ This test performs Warning: Don't use on a key k that you don't want to loose!""" if k in store: # deleting all docs in tmp del store[k] assert (k in store) == False # see that key is not in store (and testing __contains__) orig_length = len(store) # the length of the store before insertion store[k] = v # write v to k (testing __setitem__) assert store[k] == v # see that the value can be retrieved (testing __getitem__, and that __setitem__ worked) assert len(store) == orig_length + 1 # see that the number of items in the store increased by 1 assert (k in store) == True # see that key is in store now (and testing __contains__ again) assert k in list(store) # testing listing the (key) contents of a store (and seeing if ) assert store.get(k) == v # the get method _ = next(iter(store.keys())) # get the first key (test keys method) _ = next(iter(store.__iter__())) # get the first key (through __iter__) k in store.keys() # test that the __contains__ of store.keys() works try: _ = next(iter(store.values())) # get the first value (test values method) _ = next(iter(store.items())) # get the first (key, val) pair (test items method) except Exception: print("values() (therefore items()) didn't work: Probably testing a persister that had other data in it that your persister doesn't like") assert (k in store) == True # testing __contains__ again del store[k] # clean up (and test delete) Local Files There are many choices of local file stores according to what you're trying to do. One general (but not too general) purpose local file store is 'py2store.stores.local_store.RelativePathFormatStoreEnforcingFormat'. It can do a lot for you, like add a prefix to your keys (so you can talk in relative instead of absolute paths), lists all files in subdirectories as well recursively, only show you files that have a given pattern when you list them, and not allow you to write to a key that doesn't fit the pattern. Further, it also has what it takes to create parametrized paths or parse out the parameters of a path. from py2store.stores.local_store import RelativePathFormatStoreEnforcingFormat as LocalFileStore import os rootdir = os.path.expanduser('~/pystore_tests/') # or replace by the folder you want to use os.makedirs(rootdir, exist_ok=True) # this will make all directories that don't exist. Don't use if you don't want that. store = LocalFileStore(path_format=rootdir) basic_test(store, k='foo', v='bar') The signature of LocalFileStore is: LocalFileStore(path_format, mode='', buffering=-1, encoding=None, errors=None, newline=None, closefd=True, opener=None) Often path_format is just used to specify the rootdir, as above. But you can specify the desired format further. For example, the following will only yield .wav files, and only allow you to write to keys that end with .wav: store = LocalFileStore(path_format='/THE/ROOT/DIR/{}.wav') The following will additional add the restriction that those .wav files have the format 'SOMESTRING_' followed by digits: store = LocalFileStore(path_format='/THE/ROOT/DIR/{:s}_{:d}.wav') You get the point... The other arguments of LocalFileStore or more or less those of python's open function. The slight difference is that here the mode argument applies both to read and write. If mode='b' for example, the file will be opened with mode='rb' when opened to read and with mode='wb' when opened to write. For assymetrical read/write modes, the user can specify a read_mode and write_mode (in this case the mode argument is ignored). MongoDB A MongoDB collection is not as naturally a key-value storage as a file system is. MongoDB stores "documents", which are JSONs of data, having many (possibly nested) fields that are not by default enforced by a schema. So in order to talk to mongo as a key-value store, we need to specify what fields should be considered as keys, and what fields should be considered as data. By default, the _id field (the only field ensured by default to contain unique values) is the single key field, and all other fields are considered to be data fields. Note: py2store mongo tools have now been moved to the mongodol project. Import from there. Requires pymongo. from mongodol.stores import MongoStore # Note: project moved to mongodol now # The following makes a default MongoStore, the default pymongo.MongoClient settings, # and db_name='py2store', collection_name='test', key_fields=('_id',) store = MongoStore() basic_test(store, k={'_id': 'foo'}, v={'val': 'bar', 'other_val': 3}) But it can get annoying to specify the key as a dict every time. The key schema is fixed, so you should be able to just specify the tuple of values making the keys. And you can, with MongoTupleKeyStore from mongodol.stores import MongoTupleKeyStore # Note: project moved to mongodol now store = MongoTupleKeyStore(key_fields=('_id', 'name')) basic_test(store, k=(1234, 'bob'), v={'age': 42, 'gender': 'unspecified'}) S3, SQL, Zips, Dropbox S3 persister/stores work pretty much like LocalStores, but stores in S3. You'll need to have an account with AWS to use this. Find S3 stores in py2store.stores.s3_stores. SQL give you read and write access to SQL DBs and tables. ZipReader (and other related stores) talks to one or several files, giving you the ability to operate as if the zips were uncompressed. Dropbox will give you access to dropbox files through the same dict-like interface. Miscellenous Caching There's some basic caching capabilities in py2store. Basic, but covers a lot of use cases. But if you want to bring your own caching tools, you might be able to use them here too. For example, the very popular cachetools uses a dict as it's default cache store, but you can specify any mutable mapping (that takes tuples as keys!). Say you want to use local files as your cache. Try something like this: from cachetools import cached # there's also LRUCache, TTLCache... from py2store import QuickPickleStore, wrap_kvs def tuple_to_str(k: tuple, sep: str=os.path.sep) -> str: return sep.join(k) if isinstance(k, tuple): return os.path.sep.join(k) else: return k def str_to_tuple(k: str, sep: str=os.path.sep) -> tuple: return k.split(sep) @wrap_kvs(id_of_key=tuple_to_str, key_of_id=str_to_tuple) class TupledQuickPickleStore(QuickPickleStore): """A local pickle store with tuple keys (to work well with cachetools)""" local_files_cache = TupledQuickPickleStore() # no rootdir? Fine, will choose a local file @cached(cache=local_files_cache) def hello(x='world'): return f"hello {x}!" >>> hello('QT') >>> import pickle >>> # Let's now verify that we actually have a file with such content >>> with open(os.path.join(local_files_cache._prefix, 'QT'), 'rb') as fp: ... file_contents = pickle.load(fp) >>> assert file_contents == 'hello QT!' Philosophical FAQs Is a store an ORM? A DAO? Call it what you want, really. It would be tempting to coin py2store as ya(p)orm (yet another (python) object-relational mapping), but that would be misleading. The intent of py2store is not to map objects to db entries, but rather to offer a consistent interface for basic storage operations. In that sense, py2store is more akin to an implementation of the data access object (DAO) pattern. Of course, the difference between ORM and DAO can be blurry, so all this should be taken with a grain of salt. Advantages and disadvantages such abstractions are easy to search and find, but in most cases the pros probably outweigh the cons. Most data interaction mechanisms can be satisfied by a subset of the collections.abc interfaces. For example, one can use python's collections.Mapping interface for any key-value storage, making the data access object have the look and feel of a dict, instead of using other popular method name choices such for such as read/write, load/dump, etc. One of the dangers there is that, since the DAO looks and acts like a dict (but is not) a user might underestimate the running-costs of some operations. Should storage transform the data? When does "storing data" not transform data? The answer is that storage almost always transforms data in some way. But some of these transformations are taken for granted, because there's so often "attached" (i.e. "co-occur") with the raw process of storing. In py2store, the data transformation is attached to (but not entangled with) the store object. This means you have a specific place where you can check or change that aspect of storage. Having a consistent and simple interface to storage is useful. Being able to attach key and value transformations to this interface is also very useful. But though you get a lot for cheap, it's not free: Mapping the many (storage systems operations) to the one (consistent interface) means that, through habit, you might project some misaligned expectations. This is one of the known disadvantages of Data Access Objects (DAOs)) Have a look at this surreal behavior: # defining the store from py2store.base import Store class MyFunnyStore(Store): def _obj_of_data(self, data): return f'hello {data}' # trying the store out s = MyFunnyStore() s['foo'] = 'bar' # put 'bar' in 'foo' assert s['foo'] == 'hello bar' # the value that 'foo' contains SEEMS to be 'hello bar' # so look how surreal that can be: s['foo'] = s['foo'] # retrieve what's under 'foo' and store it back into 'foo' assert s['foo'] == 'hello hello bar' # what the... s['foo'] = s['foo'] # retrieve what's under 'foo' and store it back into 'foo' assert s['foo'] == 'hello hello hello bar' # No no no! I do not like green eggs and ham! This happens, because though you've said s['foo'] = 'bar', the value returned by s['foo'] is actually 'hello bar'. Why? Because though you've stored 'bar', you're transforming the data when you read it (that's what _obj_of_data does). Is that a desirable behavior? Transforming the stored data before handing it to the user? Well, this is such a common pattern that it has it's own acronym and tools named after the acronym: ETL. Extract, Transform, Load. What is happening here is that we composed extraction and transformation. Is that acceptable? Say I have a big store of tagged audio files of various formats but only want to work with files containing the 'gunshot' tag and lasting no more than 10s, and further get the data as a waveform (a sequence of samples). You'd probably find this acceptable: audio_file_type=type_of(file) with open(file, 'wb') as fp: file_bytes = fp.read() wf = convert_to_waveform(file_bytes) Or this: filt = mk_file_filter(tag='gunshot', max_size_s=10) for file in filter(filt, audio_source): with open(file, 'wb') as fp: file_bytes = fp.read() wf = convert_to_waveform(file_bytes, audio_file_type=type_of(file)) send_wf_for_analysis(wf) You might even find it acceptable to put such code in a functions called get_waveform_from_file, or generator_of_waveforms_of_filtered_files. So why is it harder to accept something where you make a store that encompasses your needs. You do s = WfStore(audio_source, filt) and then wf = s[some_file] # get a waveform or for wf in s.values(): # iterate over all waveforms send_wf_for_analysis(wf) It's harder to accept precisely because of the simplicity and consistency (with dict operations). We're used to s[some_file] meaning "give me THE value stored in s, in the 'some_file' slot". We're not used to s[some_file] meaning "go get the data stored in some_file and give it to me in a format more convenient for my use". Stores allow you to compose extraction and transformation, or transformation and loading, and further specifying filter, caching, indexing, and many other aspects related to storage. Those, py2store helps you create the perspective you want, or need. That said, one needs to be careful that the simplicity thus created doesn't induce misuse. For example, in the MyFunnyStore example above, we may want to use a different store to persist and to read, and perhaps reflect their function in their names. For example: # defining the store from py2store.base import Store class ExtractAndTransform(Store): def _obj_of_data(self, data): return f'hello {data}' store = Store() extract_and_transform = ExtractAndTransform(store) store['foo'] = 'bar' # put 'bar' in 'foo' assert store['foo'] == 'bar' # the value that store contains for 'foo' is 'bar' assert extract_and_transform['foo'] == 'hello bar' # the value that extract_and_transform gives you is 'bar' Some links Presentation at PyBay 2019: ETL: Extract, Transform, Load: ORM: Object-relational mapping: DAO: Data access object: DRY: SOC: Separation Of Concerns: COC: Convention Over Configuration: Project details Release history Release notifications | RSS feed Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/py2store/
CC-MAIN-2022-40
refinedweb
7,903
63.19
On 8/25/06, Martijn Faassen <[EMAIL PROTECTED]> wrote: I actually think it would be *nice*, if not at all essential, to have a common namespace for community work. For such a common namespace "z3c" is a bit unwieldier than something like "zorg". We could consider establishing a more coherent pattern in the future, perhaps. Advertising As mentioned in another thread, I'd like to avoid namespace clutter, but I have come to realize I have no good arguments for that. :-) So, if we can't agree, then it's just yet another namespace. zope, zc, lovely, schooltool, codespeak(?), nuxeo, z3lab, z3c and now zorg. If there is no interest in merging basic toolkits into the same namespace (which there overwhelmingly was not in the other discussion) then having both z3c and zorg can't be a problem either. ;) -- Lennart Regebro, Nuxeo CPS Content Management _______________________________________________ Zope3-dev mailing list Zope3-dev@zope.org Unsub:
https://www.mail-archive.com/zope3-dev@zope.org/msg06018.html
CC-MAIN-2017-30
refinedweb
155
63.59
Date: Feb 15, 2013 1:16 PM Author: Michael Stemper Subject: Re: infinity can't exist In article <511D0025.FD5B3D49@btinternet.com>, Frederick Williams <freddywilliams@btinternet.com> writes: >Craig Feinstein wrote: >> Let's say I have a drawer of an infinite number of [...] >> >> How does modern mathematics resolve this paradox? > >A few years ago Zdislav V. Kovarik made a post listing a dozen or more >meaning of the word "infinity" as used in different branches of >mathematics. I'm hoping that he won't mind me reposting it: > > >There is a long list of "infinities (with no claim to exhaustiveness): > infinity of the one-point compactification of N, [snip] > infinity in the theory of convex optimization, > etc.; > > each of these has a clear definition and a set of well-defined rules > for handling it. > > And the winner is... > the really, really real infinity imagined by inexperienced debaters of > foundations of mathematics; this one has the advantage that it need > not be defined ("it's just there, don't you see?") and the user can > switch from one set of rules to another, without warning, and without > worrying about consistency, for the purpose of scoring points in idle > and uneducated (at least on one side) debates. Bravo! Author! Author! -- Michael F. Stemper #include <Standard_Disclaimer> A preposition is something that you should never end a sentence with.
http://mathforum.org/kb/plaintext.jspa?messageID=8339124
CC-MAIN-2015-27
refinedweb
225
53.61
Simple and modern Node.js wrapper implementation for Tesseract OCR CLI. import recognize from 'tesseractocr' const text = await recognize('/path/to/image.png') console.log('Yay! Text recognized:', text) Note: Despite that it's encouraged to use the more modern promise-based API, the good old callbacks are still supported. The overall API documentation can be found here There is a hard dependency on the Tesseract project. You can find installation instructions for various platforms on the project site. For Homebrew users, the installation is quick and easy. brew install tesseract You can then go about installing the Node.js package to expose the JavaScript API: npm install tesseractocr Clone the repo, npm install and then npm test or npm run benchmarks. The project's changelog is available here
https://openbase.com/js/tesseractocr
CC-MAIN-2022-21
refinedweb
129
59.5
Last updated on MARCH 02, 2017 Applies to:Siebel Tools - Version 8.2.2.4.17 [IP2013] and later Information in this document applies to any platform. Symptoms Validation is not firing when field value is removed and tabbed out or while reforming a refine query. On Sample Tools/Client IP2013, PatchSet 17 on PUB HLS Incident Form Applet - Header, we have a field, Source Organization which is a join based field. On the PUB HLS Incident, we have the following code on the PreSetFieldValue event, function BusComp_PreSetFieldValue (FieldName, FieldValue) { if(FieldName == "Source Organization") { if(FieldValue == null || FieldValue == "") TheApplication().RaiseErrorText("Cannot be blank"); } return (ContinueOperation); } Issue Description: When the user selects a record having an existing value in the Source Organization field, then clears the existing value in this field, and steps off the record. User is prompted with an expected error message 'Cannot be blank'. When the user clicks 'Ok' and perfoms a refine query or navigates to other tabs, the field becomes blank and the value gets updated to null. Steps to Reproduce: 1. Write the above mentioned in the BusComp_PreSetFieldValue event. 2. Compile and open the application. 3. Navigate to Incidents tab - All Incidents screen. 4. Select a record which has the 'Source Organization' field populated. 5. Select the value in the field and press Delete or backspace. 6. Value is now blank. Step off the Record. 7. View the error message 'Cannot be blank'. 8. Perform a refine query or 'Alt + Enter' on this screen. 9. Observe results. Expected Result: The previous value should stay intact without being updated to blank Actual Result: The field value gets updated to blank. Cause My Oracle Support provides customers with access to over a Million Knowledge Articles and hundreds of Community platforms
https://support.oracle.com/knowledge/Siebel/2114399_1.html
CC-MAIN-2017-39
refinedweb
293
57.06
This is the fourth in a series of blog posts on the Windows Subsystem for Linux (WSL). For background information you may want to read the architectural overview, introduction to pico processes and WSL system calls blog posts. Posted on behalf of Sven Groot. Introduction One, and this post looks into how WSL bridges those two worlds. File systems on Linux Linux abstracts file systems operations through the Virtual File System (VFS), which provides both an interface for user mode programs to interact with the file system (through system calls such as open, read, chmod, stat, etc.) and an interface that file systems have to implement. This allows multiple file systems to coexist, providing the same operations and semantics, with VFS giving a single namespace view of all these file systems to the user. File systems are mounted on different directories in this namespace. For example, on a typical Linux system your hard drive may be mounted at the root, /, with directories such as /dev, /proc, /sys, and /mnt/cdrom all mounting different file systems which may be on different devices. Examples of file systems used on Linux include ext4, rfs, FAT, and others. VFS implements the various system calls for file system operations by using a number of data structures such as inodes, directory entries and files, and related callbacks that file systems must implement. Inodes The inode is the central data structure used in VFS. It represents a file system object such as a regular file, directory, symbolic link, etc. An inode contains information about the file type, size, permissions, last modified time, and other attributes. For many common Linux disk file systems such as ext4, the on-disk data structures used to represent file metadata directly correspond to the inode structure used by the Linux kernel. While an inode represents a file, it does not represent a file name. A single file may have multiple names, or hard links, but only one inode. File systems provide a lookup callback to VFS which is used to retrieve an inode for a particular file, based on the parent inode and the child name. File systems must implement a number of other inode operations such as chmod, stat, open, etc. Directory entries VFS uses a directory entry cache to represent your file system namespace. Directory entries only exist in memory, and contain a pointer to the inode for the file. For example, if you have a path like /home/user/foo, there is a directory entry for home, user, and foo, each with a pointer to an inode. Directory entries are cached for fast lookup, but if an entry is not yet in the cache, the inode lookup operation is used to retrieve the inode from the file system so a new directory entry can be created. File objects When an inode is opened, a file object is created for that file which keeps track of things like the file offset and whether the file was opened for read, write or both. File systems must provide a number of file operations such as read, write, sync, etc. File descriptors Applications refer to file objects through file descriptors. These are numeric values, unique to a process, that refer to any files the process has open. File descriptors can refer to other types of objects that provide a file-like interface in Linux, including ttys, sockets, and pipes. Multiple file descriptors can refer to the same file object, e.g. through use of the dup system call. Special file types Besides just regular files and directories, Linux supports a number of additional file types. These include device files, FIFOs, sockets, and symbolic links. Some of these files affect how paths are parsed. Symbolic links are special files that refer to a different file or directory, and following them is handled seamlessly by VFS. If you open the path /foo/bar/baz and bar is a symbolic link to /zed, then you will actually open /zed/baz instead. Similarly, a directory may be used as a mount point for another file system. In this case, when a path crosses this directory, all inode operations below the mount point go to the new file system. Special and pseudo file systems Linux uses a number of file systems that don’t read files from a disk. TmpFs is used as a temporary, in-memory file system, whose contents will not be persisted. ProcFs and SysFs both provide access to kernel information about processes, devices and drivers. These file systems do not have a disk, network or other device associated with them, and instead are virtualized by the kernel. File systems on Windows Windows generalizes all system resources into objects. These include not just files, but also things like threads, shared memory sections, and timers, just to name a few. All requests to open a file ultimately go through the Object Manager in the NT kernel, which routes the request through the I/O Manager to the correct file system driver. The interface that file system drivers implement in Windows is more generic and enforces fewer requirements. For example, there is no common inode structure or anything similar, nor is there a directory entry; instead, file system drivers such as ntfs.sys are responsible for resolving paths and opening file objects. File systems in Windows are typically mounted on drive letters like C:, D:, etc., although they can be mounted on directories in other file systems as well. These drive letters are actually a construct of Win32, and not something that the Object Manager directly deals with. The Object Manager keeps a namespace that looks similar to the Linux file system namespace, rooted in \, with file system volumes represented by device objects with paths like \Device\HarddiskVolume1. When you open a file using a path like C:\foo\bar, the Win32 CreateFile call translates this to an NT path of the form \DosDevice\C:\foo\bar, where \DosDevice\C: is actually a symbolic link to, for example, \Device\HarddiskVolume4. Therefore, the real full path to the file is actually \Device\HarddiskVolume4\foo\bar. The object manager resolves each component of the path, similar to how VFS would in Linux, until it encounters the device object. At this point, it forwards the request to the I/O manager, which creates an I/O Request Packet (IRP) with the remaining path, which it sends to the file system driver for the device. File objects When a file is opened, the object manager creates a file object for it. Instead of file descriptors, the object manager provides handles to file objects. Handles can actually refer to any object manager object, not just files. When you call a system call like NtReadFile (typically through the Win32 ReadFile function), the I/O manager again creates an IRP to send down to the file system driver for the file object to perform the request. Because there are no inodes or anything similar in NT, most operations on files in Windows require a file object. Reparse points Windows only supports two file types: regular files and directories. Both files and directories can be reparse points, which are special files that have a fixed header and a block of arbitrary data. The header includes a tag that identifies the type of reparse point, which must be handled by a file system filter driver, or for built-in reparse point types, the I/O manager itself. Reparse points are used to implement symbolic links and mount points. In these cases, the tag indicates that the reparse point is a symbolic link or mount, and the data associated with the reparse point contains the link target, or volume name for mount points. Reparse points can also be used for other functionality such as the placeholder files used by OneDrive in Windows 8. Case sensitivity Unlike Linux, Windows file systems are by default case preserving, but not case sensitive. In actuality, Windows and NTFS do support case sensitivity, but this behavior is not enabled by default. File systems in WSL The Windows Subsystem for Linux must translate various Linux file system operations into NT kernel operations. WSL must provide a place where Linux system files can exist, with all the functionality required for that including Linux permissions, symbolic links and other special files such as FIFOs; it must provide access to the Windows volumes on your system; and it must provide special file systems such as ProcFs. To facilitate this, WSL has a VFS component that is modeled after the VFS on Linux. The overall architecture is shown below. When an application calls a system call, this is handled by the system call layer, which defines the various kernel entry points such as open, read, chmod, stat, etc. For these file-related system calls, the system call layer has very little functionality; it basically just forwards the call to VFS. For operations that use paths (such as open or stat), VFS resolves the path using a directory entry cache. If an entry is not in the cache, it calls into one of several file system plugins to create an inode for the entry. These plugins provide inode operations like lookup, chmod, and others, similar to the inode operations used by the Linux kernel. When a file is opened, VFS uses the file system’s inode open operation to create a file object, and returns a file descriptor for that file object. System calls operating on the file descriptor (such as read, write or sync) call file operations defined by the file systems. This system is deliberately very close to how Linux behaves, so WSL can support the same semantics. VFS defines several file system plugins: VolFs and DrvFs are used to represent files on disk, and the remainder are the in-memory file system TmpFs and pseudo file systems such as ProcFs, SysFs, and CgroupFs. VolFs and DrvFs are where Linux file systems meet Windows file systems. They are how WSL interacts with files on your disks, and serve two different purposes: VolFs is designed to provide full support for Linux file system features, and DrvFs is designed for interop with Windows. Let’s look at these file systems in more detail. VolFs The primary file system used by WSL is VolFs. It is used to store the Linux system files, as well as the content of your Linux home directory. As such, VolFs supports most features the Linux VFS provides, including Linux permissions, symbolic links, FIFOs, sockets, and device files. VolFs is used to mount the VFS root directory, using %LocalAppData%\lxss\rootfs as the backing storage. In addition, a few additional VolFs mount points exist, most notably /root and /home which are mounted using %LocalAppData%\lxss\root and %LocalAppData%\lxss\home respectively. The reason for these separate mounts is that when you uninstall WSL, the home directories are not removed by default, so any personal files stored there will be preserved. Note that all these mount points use directories in your Windows user folder for storage. Each Windows user has their own WSL environment, and can therefore have Linux root privileges and install applications without affecting other Windows users. Inodes and file objects Since Windows has no related inode concept, VolFs must keep a handle to a Windows file object in an inode. When VFS requests a new inode using the lookup callback, VolFs uses the handle from the parent inode and the name of the child to perform a relative open and get a handle for the new inode. These handles are opened without any read/write access to the files, and can only be used for metadata requests. When a file is opened, VolFs creates a Linux file object that points to the inode. It also reopens the inode’s file handle with the requested read/write access and stores the new handle in the file object. This handle is then used to satisfy file operations like read and write. Emulating Linux features As discussed above, Linux diverges from Windows in several ways for file systems. VolFs must provide support for several Linux features that are not directly supported by Windows. Case sensitivity is handled by Windows itself. As mentioned earlier, Windows and NTFS actually support case sensitive operations, so VolFs simply requests the Object Manager to treat paths as case sensitive regardless of the global registry key controlling this behavior. Linux also supports nearly all characters as legal characters in file names. NT has more restrictions, where some characters are not allowed at all and others may have special meanings (such as ‘:’ denoting an alternate data stream). To support all Linux file names, VolFs escapes illegal characters in file names.. Inodes in Linux have a number of attributes which don’t exist in Windows, including their owner and group, the file mode, and others. These attributes are stored in NTFS Extended Attributes associated with the files on disk. The following information is stored in the Extended Attributes: - Mode: this includes the file type (regular, symlink, FIFO, etc.) and the permission bits for the file. - Owner: the user ID and group ID of the Linux user and group that own the file. - Device ID: for device files, the device major and minor number of the device. Note that WSL currently does not allow users to create device files on VolFs. - File times: the file accessed, modified and changed times on Linux use a different format and granularity than on Windows, so these are also stored in the EAs. In addition, if a file has any file capabilities, these are stored in an alternate data stream for the file. Note that WSL currently does not allow users to modify file capabilities for a file. The remaining inode attributes, such as inode number and file size, are derived from information kept by NTFS. Interoperability with Windows While VolFs files are stored in regular files on Windows in the directories mentioned above, interoperability with Windows is not supported. If a new file is added to one of these directories from Windows, it lacks the EAs needed by VolFs, so VolFs doesn’t know what to do with the file and simply ignores it. Many editors will also strip the EAs when saving an existing file, again making the file unusable in WSL. Additionally, since VFS caches directory entries, any modifications to those directories that are made from Windows while WSL is running may not be accurately reflected. DrvFs To facilitate interoperability with Windows, WSL uses the DrvFs file system. WSL automatically mounts all fixed drives with supported file systems under /mnt, such as /mnt/c, /mnt/d, etc. Currently, only NTFS and ReFS volumes are supported. DrvFs operates in a similar fashion as VolFs. When creating inodes and file objects, handles are opened to Windows files. However, in contrast to VolFs, DrvFs adheres to Windows rules (with a few exceptions, noted below). Windows permissions are used, only legal NTFS file names are allowed, and special file types such as FIFOs and sockets are not supported. DrvFs permissions Linux usually uses a simple permission model where a file allows read, write or execute access to either the owner of the file, the group, or everyone else. Windows instead uses Access Control Lists (ACLs) that specify complex access rules for each individual file and directory (Linux does also have the ability to use ACLs, but this is not currently supported in WSL). When opening a file in DrvFs, Windows permissions are used based on the token of the user that executed bash.exe. So in order to access files under C:\Windows, it’s not enough to use “sudo” in your bash environment, which gives you root privileges in WSL but does not alter your Windows user token. Instead, you would have to launch bash.exe elevated to gain the appropriate permissions. In order to give the user a hint about the permissions they have on files, DrvFs checks the effective permissions a user has on a file and converts those to read/write/execute bits, which can be seen for example when running “ls -l”. However, there is not always a one-to-one mapping; for example, Windows has separate permissions for the ability to create files or subdirectories in a directory. If the user has either of these permissions, DrvFs will report write access on the directory, while in fact some operations may still fail with access denied. Since your effective access to a file may differ depending on whether bash.exe was launched elevated or not, the file permissions shown in DrvFs will also change when switching between elevated and non-elevated bash instances. When calculating the effective access to a file, DrvFs takes the read-only attribute into account. A file with the read-only attribute set in Windows will show up in WSL as not having write permissions. Chmod can be used to set the read-only attribute (by removing all write permissions, e.g. “chmod a-w some_file”) or clear it (by adding any write permissions, e.g. “chmod u+w some_file”). This behavior is similar to the CIFS file system in Linux, which is used to access Windows SMB shares. Case sensitivity Since the support is there in Windows and NTFS, DrvFs supports case sensitive files. This means it’s possible to create two files whose name only differs by case in DrvFs. Note that many Windows applications may not be able to handle this situation, and may not be able to open one or both of the files. Case sensitivity is disabled on the root of your volumes, but is enabled everywhere else. So in order to use case sensitive files, do not attempt to create them under /mnt/c, but instead create a directory where you can create the files. Symbolic links While NT supports symbolic links, we could not rely on this support because symbolic links created by WSL may point to paths like /proc which have no meaning in Windows. Additionally, NT requires administrator privileges to create symbolic links. So, another solution had to be found. Unlike VolFs, we could not rely on EAs to indicate a file is a symbolic link in DrvFs. Instead, WSL uses a new type of reparse point to represent symbolic links. As a result, these links will work only inside WSL and cannot be resolved by other Windows components such as File Explorer or cmd.exe. Note that since ReFS lacks support for reparse points, it also doesn’t support symbolic links in WSL. NTFS however now has full symbolic link support in WSL. Interoperability with Windows Unlike VolFs, DrvFs does not store any additional information. Instead, all inode attributes are derived from information used in NT, by querying file attributes, effective permissions, and other information. DrvFs also disables directory entry caching to ensure it always presents the correct, up-to-date information even if a Windows process has modified the contents of a directory. As such, there is no restriction on what Windows processes can do with the files while DrvFs is operating on them. DrvFs also uses Windows delete semantics for files, so a file cannot be unlinked if there are any open file descriptors (or handles from Windows processes) to the file. ProcFs and SysFs Like in Linux, these special file systems do not show files that exist on disk, but instead represent information kept by the kernel about processes, threads, and devices. These files are dynamically generated when read. In some cases, the information for the files is kept entirely inside the lxcore.sys driver. In other cases, such as the CPU usage of a process, WSL queries the NT kernel for this information. However, there is no interaction here with Windows file systems. Conclusion WSL provides access to Windows files by emulating full Linux behavior for the internal Linux file system with VolFs, and by providing full access to Windows drives and files through DrvFs. As of this writing, DrvFs enables some of the functionality of Linux file systems, such as case sensitivity and symbolic links, while still supporting interoperability with Windows. In the future, we will continue to improve our support for Linux file system features, not only in VolFs but also in DrvFs. The goal is to reduce the number of scenarios that require you to stay in the VolFs mounts with all the limitations on interoperability that entails. These improvements are driven by the great feedback we get from the community on GitHub and User Voice to help us target the most important scenarios. Sven Groot and Seth Juarez explore WSL file system support. Inodes in Linux have a number of attributes which don’t exist >in Linux In Windows? Previous comment is broken. Inodes in Linux have a number of attributes which don’t exist in Linux… In Windows? The original comment actually meant : Inodes in Linux have a number of attributes which don’t exist in Windows Subsystem for Linux. How do you handle Unicode characters in paths, both in VolFS and DrvFS? utf-8 on the WSL side, hopefully? What happens when you hit a path that is not valid utf-16 on the NT side or a path that is not valid utf-8 (or whatever is used) on the WSL side? Does DrvFS see NT symbolic links and junction as symbolic links? What about NT mount points? L., WSL accepts utf-8 paths from the Linux side, and translates them to utf-16 when dealing with Windows. Paths with invalid characters in them would cause an error when you attempt to use them. NT junctions, symbolic links and mount points (basically, any reparse point other than our custom Linux symbolic link type) are not supported. The reason is that they can cross over into another volume, which would have a different, possibly incompatible filesystem, with possibly conflicting file IDs, and a whole host of other issues that our driver currently isn’t set up to deal with. Improving our interop in this area is definitely something we’re continuing to investigate. Thanks! Another great article! Possible Typo: “Inodes in Linux have a number of attributes which don’t exist in Linux” Two suggestions: Use interoperable NTFS symlinks in DrvFS when possible Add symlinks for standard %USERPROFILE% subdirectories to $HOME By the way, do you have plans to support fuse filesystems? What about NFSv4 (iirc the Windows NFS *client* does not support v4, only the server does) and NFSv4 ACLs? Realy Great Article 🙂 inotify not work in directories . Please update is on inotify status. Github://Microsoft/bashonwindows/issues/216 Where is the WSL File-system stored within windows? I’d like to make sure I am capturing backups of it periodically. %USERPROFILE%\AppData\Local\lxss\rootfs While I haven’t used the WSL, yet, I do have a question about development and portability. For the thought process I’m in, I’ll preface to say that since Windows 8, we’ve had the ability to mount VHD(X) files as a logical device. Would it be possible to expand the VOIFs and DrvFs support for using a VHD(X) as the base file system (either as just a configured NTFS volume for WSL to use or even more directly as a direct device within the pico processes emulating the Linux Kernel) Ideally, this would make a WSL subsystem configuration transportable, similar to a client OS in a HyperV environment. This would help with development boxes where tools, makefiles, and other errata on a project – or perhaps as a core utilities distributable set within an organization – which could be easily transported. I’m quite excited having played around a bit with WSL after the anniversary update of Windows 10. But I discovered one issue. In the Windows Disk Management I have mounted one hard drive as a subdirectory on the C: drive. But I cannot access let alone browse this directory from bash on Windows. Are there plans to address this problem? i can’t activate bash on windows 14393.0 32 bits does windows bash support 32 bit windows ? if doesn’t please support it i want running bash on windows 32 bit How are you handling the MAX_PATH limitation of 260 chars as imposed by the Win32 API? NPM (Node Package Manager) in particular tends to create folder hierarchies that exceed this limitation, and if this isn’t resolved here, it would preclude the usage of WSL for many popular repositories. Additionally, it appears that Git has issues when working on shared directories, which is how the windows filesystem is represented. Do you have any proposed solutions to that problem as well? The upper limit on Windows NT is 32767, not 260 — has been for a long time, you just have to know how to ask. Just prefix the drive letter with \\?\ — for more information That was done so that an app does not accidentally get a > 260 character path. On Windows 10, you can enable longer paths in the core API’s system wide via Group Policy. You can also enable them in individual applications via the application manifest file — instructions in the article. The catch to that is, you may have apps that are allocating 260 character buffers — so when you copy 10000 characters into that buffer, you’re overwriting who knows what. App could crash, it could create security issues, etc. And so best to enable it only if you control both the app and any extensions that might make assumptions about the buffer size. Why does WSL not recognize the JDK installation done with a Windows Application Installer (taken from Oracle Java website) ? WSL applications can not directly interact with true win32 apps ( a windows installed JVM is win 32). There might be some interoperability issues with MKS. I have an old copy of this toolkit which I use for basic unix style command lines tools such as ls, awk and grep. But my real reason is the vi editor. I do a lot of manuscript preparation and want the convenience of vi. I know this is also available through gvim (and others). But the main point is that after going through the installation steps, the bash command does not bring up a license. I might remove the MKS toolkit and try again. I need to be able to access USB devices. Example: /dev/ttyUSB0 etc. When will this be supported? How do I get to know of updates? BTW, how do I check my current installed version? Admittedly a nit, but: .” Makes it sound as if Linux is unique in the characteristic that removal of a filename can occur while there are open handles to the file data represented by that filename; in fact, all Unix derived operating systems share this characteristic.
https://blogs.msdn.microsoft.com/wsl/2016/06/15/wsl-file-system-support/
CC-MAIN-2017-34
refinedweb
4,469
60.95
Hello I was messing around with Microsoft Visual C++ 2008 Express. I am new to programming and am trying to learn C++ So here is a simple program I made: At first, I had the 14th line say: while(HottestGirl=1||2||3||4)At first, I had the 14th line say: while(HottestGirl=1||2||3||4)Code:#include <iostream> using namespace std; int main() { int HottestGirl=1; cout << "Who is the hottest girl?\n"; cout << "1)Claudia Lynx\n"; cout << "2)Megan Fox\n"; cout << "3)Audrina Patridge\n"; cout << "\n"; while(HottestGirl=1) { cout << "Selection:"; cin >> HottestGirl; switch (HottestGirl) { case 1: cout << "Claudia Lynx is the hottest girl!\n\n"; break; case 2: cout << "Megan Fox is the hottest girl!\n\n"; break; case 3: cout << "Audrina Patridge is the hottest girl!\n\n"; break; default: cout << "Invalid Selection\n\n"; HottestGirl=4; break; } } cin.ignore(); cin.get(); } Therefore, no matter what number the user enters, it will still go in a loop and ask for Selection again. But then I changed it to: while(HottestGirl=1) Just to see what happens The problem is, it still goes in a loop and asks for the selection! Why is this? Shouldn't it only loop when the user enters 1 for the variable HottestGirl? I also have another minor question if anyone would like to help I was going along with the tutorial and it tells me to put cin.ignore(); after having the user input a value and to also have cin.get(); at the end of the program so i can see the results of the program. Could someone explain this a little bit more for me please. I mean I kinda understand how the cin.ignore(); removes the enter and the cin.get(); waits for the user to press enter, but its still kinda bothering me, I don't really get it. Also, are there any other ways to be able to see the results of the program without it just running the code then auto closing. I think I saw someone you can do like System Pause or something a long time ago. Thanks for any help in advance!
http://cboard.cprogramming.com/cplusplus-programming/117470-simple-program-i-made-question-about-code.html
CC-MAIN-2015-11
refinedweb
365
80.41
mt(7) mt(7) NAME mt - magnetic tape interface and controls for stape and tape2 DESCRIPTION This entry describes the behavior of HP magnetic tape interfaces and controls, including reel-to-reel, DDS, QIC, 8mm, and 3480 tape drives. The files /dev/rmt/* refer to specific raw tape drives, and the behavior of each given unit is specified in the major and minor numbers of the device special file. Naming Conventions There are two naming conventions for device special files. The standard (preferred) convention is used on systems that support long file names. An alternate convention is provided for systems limited to short file names. The following standard convention is recommended because it allows for all possible configuration options in the device name and is used by mksf(1M) and insf(1M): /dev/rmt/c#t#d#[o][z][e][p][s[#]][w]density[C[#]][n][b] The following alternate naming convention is provided to support systems in which the /dev/rmt directory requires short file names. These device special file names are less descriptive, but guarantee unique device naming and are used by mksf(1M) and insf(1M) where required. /dev/rmt/c#t#d#[f#|i#][n][b] For each tape device present, eight device files are automatically created when the system is initialized. Four of these device files utilize either the standard (long file name) or alternate (short file name) naming conventions. When the standard naming convention is being utilized, these four files contain the density specification "BEST". When the alternate naming convention is being utilized, these four files contain the density specification "f0". There are four such files because each of the four different permutations of the "n" and "b" options (see below) is available. The remaining four files automatically created when the system is initialized utilize the pre-HP-UX 10.0 device file naming convention. This includes an arbitrary number to distinguish this tape device from others in the system, followed by the letter m. There are four such files because each of the four different permutations of the n and b options (see below) is available. These files are created as a usability feature for pre-HP-UX 10.0 users who do not wish to acquire familiarity with the standard or alternate naming conventions. Each of the automatically created four device files which utilize the standard or alternate naming conventions is linked to a device file which utilizes the pre-HP-UX 10.0 naming convention. Thus, the device files which utilize the pre-HP-UX 10.0 naming convention provide the Hewlett-Packard Company - 1 - HP-UX Release 11i: November 2000 mt(7) mt(7) same functionality as the device files which contain the density specification BEST (standard naming convention) or f0 (alternate naming convention). Options The options described here are common to all tape drivers. The c#t#d# notation in the device special file name derives from ioscan output and is described on the manpages for ioscan(1M) and intro(7). Options unique to stape and tape2 are described later in this manpage, in the DEPENDENCIES section. c# Instance number assigned by the operating system to the interface card. t# Target address on a remote bus (for example, SCSI address) d# Device unit number at the target address (for example, SCSI LUN). w Writes wait for physical completion of the operation before returning status. The default behavior (buffered mode or immediate reporting mode) requires the tape device to buffer the data and return immediately with successful status. density Density or format used in writing data to tape. This field is designated by the following values: BEST Highest-capacity density or format will be used, including data compression, if the device supports compression. NOMOD Maintains the density used for data previously written to the tape. Behavior using this option is dependent on the type of device. This option is only supported on DDS and 8MM drives. DDS Selects one of the known DDS formats; can be used to specify DDS1 or DDS2, as required. DLT Selects one of the known DLT formats; can be used to specify DLT42500_24, DLT42500_56, DLT62500_64, DLT81633_64, or DLT85937_52, as required. QIC Selects one of the known QIC formats; can be used to specify QIC11, QIC24, QIC120, QIC150, QIC525, QIC1000, QIC1350, QIC2100, QIC2GB, or QIC5GB, as required. Hewlett-Packard Company - 2 - HP-UX Release 11i: November 2000 mt(7) mt(7) D8MM Selects one of the known 8MM formats; can be used to specify D8MM8200 or D8MM8500, as required. D Selects a reel-to-reel density; can be used to specify D800, D1600, or D6250, as required. D3480 Specifies that the device special file communicates with a 3480 device. (There is only one density option for 3480.) D[#] Specifies density as a numeric value to be placed in the SCSI mode select block descriptor. The header file <sys/mtio.h> contains a list of the standard density codes. The numeric value is used only for density codes which cannot be found in this list. C[#] Write data in compressed mode, on tape drives that support data compression. If a number is included, use it to specify a compression algorithm specific to the device. Note, compression is also provided when the density field is set to BEST. n No rewind on close. Unless this mode is requested, the tape is automatically rewound upon close. b Specifies Berkeley-style tape behavior. When the b is absent, the tape drive follows AT&T-style behavior. The details are described in "Tape Behavioral Characteristics" below. f# Specify format (or density) value encoded in the minor number. The meaning of the value is dependent on the type of tape device in use. (Used for short file name notation only.) i# Specify an internal Property Table index value maintained by the tape driver, containing an array of configuration options. The contents of this table are not directly accessible. Use the lssf(1M) command to determine which configuration options are invoked. (Used for short file name notation only.) Sample Tape Device Special File Names For a QIC150 device at SCSI address 3, card instance 2, with default block size, buffered mode, and AT&T-style with rewind on close, the standard device special file name is /dev/rmt/c2t3d0QIC150. Hewlett-Packard Company - 3 - HP-UX Release 11i: November 2000 mt(7) mt(7) For a device at card instance 1, target 2, LUN 3, with exhaustive mode enabled (see DEPENDENCIES), fixed block size of 512 bytes, DDS1 density with compression, AT&T-style with no rewind on close, the standard device file special name is /dev/rmt/c1t2d3es512DDS1Cn. For a system requiring short file names, the same device special file would be named /dev/rmt/c1t2d3i<<<<#>>>>n, where <<<<#>>>> is an index value selected by the tape driver. Use the lssf(1M) command to determine which configuration options are actually used with any device file. The naming convention defined above should indicate the options used, but device files may be created with any user defined name. Tape Behavioral Characteristics When opened for reading or writing, the tape is assumed to be positioned as desired. When a file opened for writing is closed, two consecutive EOF (End of File) marks are written if, and only if, one or more writes to the file have occurred. The tape is rewound unless the no-rewind mode has been specified, in which case the tape is positioned before the second EOF just written. For QIC devices only one EOF mark is written and the tape is positioned after the EOF mark (if the no-rewind mode has been specified). When a file open for reading (only) is closed and the no-rewind bit is not set, the tape is rewound. If the no-rewind bit is set, the behavior depends on the style mode. For AT&T-style devices, the tape is positioned after the EOF following the data just read (unless already at BOT or Filemark). For Berkeley-style devices, the tape is not repositioned in any way. Each read(2) or write(2) call reads or writes the next record on the tape. For writes, the record has the same length as the buffer given (within the limits of the hardware). During a read, the record size is passed back as the number of bytes read, up to the buffer size specified. Since the minimum read length on a tape device is a complete record (to the next record mark), the number of bytes ignored (for records longer than the buffer size specified) is available in the mt_resid field of the mtget structure via the MTIOCGET call of ioctl(2). Current restrictions require tape device application programs to use 2-byte alignment for buffer locations and I/O sizes. To allow for more stringent future restrictions (4-byte aligned, etc.) and to maximize performance, page alignment is suggested. For example, if the target buffer is contained within a structure, care must be taken that structure elements before the buffer allow the target buffer to begin on an even address. If need be, placing a filler integer before the target buffer will insure its location on a 4-byte boundary. Hewlett-Packard Company - 4 - HP-UX Release 11i: November 2000 mt(7) mt(7) The ascending hierarchy of tape marks is defined as follows: record mark, filemark (EOF), setmark and EOD (End of Data). Not all devices support all types of tape marks but the positioning within the hierarchy holds true. Each type of mark is typically used to contain one or more of the lesser marks. When spacing over a number of a particular type of tape mark, hierarchically superior marks (except EOD) do not terminate tape motion and are included in the count. For instance, MTFSR can be used to pass over record marks and filemarks. Reading an EOF mark is returned as a successful zero-length read; that is, the data count returned is zero and the tape is positioned after the EOF, enabling the next read to return the next record. DDS devices and the 8mm 8505 device also support setmarks , which are used to delineate a group (set) of files. For the 8mm 8505 setmarks are only supported when the density is set to 8500 plus compression. Reading a setmark is also returned as a zero-length read. Filemarks, setmarks and EOD can be distinguished by unique bits in the mt_gstat field. Spacing operations (back or forward space, setmark, file or record) position past the object being spaced to in the direction of motion. For example, back-spacing a file leaves the tape positioned before the file mark; forward-spacing a file leaves the tape positioned after the file mark. This is consistent with standard tape usage. For QIC devices, spacing operations can take a very long time. In the worst case, a space command could take as much as 2 hours! While this command is in progress, the device is not accessible for any other commands. lseek(2) type seeks on a magnetic tape device are ignored. Instead, the ioctl(2) operations below can be used to position the tape and determine its status. The header file <sys/mtio.h> has useful information for tape handling. The following is included from <sys/mtio.h> and describes the possible tape operations: /* mag tape I/O control requests */ #define MTIOCTOP _IOW('m', 1, struct mtop) /* do mag tape op */ #define MTIOCGET _IOR('m', 2, struct mtget) /* get tape status */ /* structure for MTIOCTOP - mag tape op command */ struct mtop { short mt_op; /* operations defined below */ daddr_t mt_count; /* how many of them */ Hewlett-Packard Company - 5 - HP-UX Release 11i: November 2000 mt(7) mt(7) }; /* operations */ #define MTWEOF 0 /* write filemark (may eject) */ #define MTNOP 7 /* no operation, may set status */ #define MTEOD 8 /* DDS, QIC and 8MM only - seek to end-of-data */ #define MTWSS 9 /* DDS and 8MM only - write setmark(s) */ #define MTFSS 10 /* DDS and 8MM only - space forward setmark(s) */ #define MTBSS 11 /* DDS and 8MM only - space backward setmark(s) */ #define MTSTARTVOL 12 /* Start a new volume (for ATS) */ #define MTENDVOL 13 /* Terminate a volume (for ATS) */ #define MTRES 14 /* Reserve Device */ #define MTREL 15 /* Release Device */ #define MTERASE 16 /* Erase media */ /* structure for MTIOCGET - mag tape get status command */ struct mtget { long mt_type; /* type of magtape device */ long mt_resid; /* residual count */ /* The following two registers are device dependent */ long mt_dsreg1; /* status register (msb) */ long mt_dsreg2; /* status register (lsb) */ /* The following are device-independent status words */ long mt_gstat; /* generic status */ long mt_erreg; /* error register */ daddr_t mt_fileno; /* No longer used - always set to -1 */ daddr_t mt_blkno; /* No longer used - always set to -1 */ Information for decoding the mt_type field can be found in <sys/mtio.h>. Other Tape Status Characteristics Efficient use of streaming tape drives with large internal buffers and immediate-reporting require the following end-of-tape procedures: All writes near LEOT (Logical End of Tape) complete without error if actually written to the tape. Once the tape driver determines that LEOT has been passed, subsequent writes do not occur and an error message is returned. Hewlett-Packard Company - 6 - HP-UX Release 11i: November 2000 mt(7) mt(7) To write beyond this point (keep in mind that streaming drives have already written well past LEOT), simply ask for status using the MTIOCGET ioctl. If status reflects the EOT condition, the driver drops all write barriers. For reel-to-reel devices, caution must be exercised to keep the tape on the reel. When immediate-reporting is enabled, the tape2 driver will drop out of immediate mode and flush the device buffer with every write filemark or write setmark. The stape driver will flush the device buffers when a write filemark or write setmark command is given with the count set to zero. When immediate-reporting is disabled, the write encountering LEOT returns an error with the tape driver automatically backing up over that record. When reading near the end-of-tape, the user is not informed of LEOT. Instead, the typical double EOF marks or a pre-arranged data pattern signals the logical end-of-tape. Since magnetic tape drives vary in EOT sensing due to differences in the physical placement of sensors, any application (such as multiple- tape cpio(1) backups) requiring that data be continued from the EOT area of one tape to another tape must be restricted. Therefore, the tape drive type and mode should be identical for the creation and reading of the tapes. The following macros are defined in <sys/mtio.h> for decoding the status field mt_gstat returned from MTIOCGET. For each macro, the input parameter <x> is the mt_gstat field. GMT_BOT(x) Returns TRUE at beginning of tape. GMT_EOD(x) Returns TRUE if End-of-Data is encountered for DDS, QIC or 8MM. GMT_EOF(x) Returns TRUE at an End-of-File mark. GMT_EOT(x) Returns TRUE at end of tape. GMT_IM_REP_EN(x) Returns TRUE if immediate reporting mode is enabled. GMT_ONLINE(x) Returns TRUE if drive is on line. GMT_SM(x) Returns TRUE if setmark is encountered. GMT_WR_PROT(x) Returns TRUE if tape is write protected. GMT_COMPRESS(x) Returns TRUE if data compression is enabled. Hewlett-Packard Company - 7 - HP-UX Release 11i: November 2000 mt(7) mt(7) GMT_DENSITY(x) Returns the currently configured 8-bit density value. Supported values are defined in <sys/mtio.h>. GMT_QIC_FORMAT(x) and GMT_8mm_FORMAT(x) Return the same information as does GMT_DENSITY(x). GMT_DENSITY(x) is preferred because GMT_QIC_FORMAT and GMT_8mm_FORMAT may be obsoleted at some future date. GMT_D_800(x) Returns TRUE if the density encoded in mt_gstat is 800 bpi. GMT_D_1600(x) Returns TRUE if the density encoded in mt_gstat is 1600 bpi. GMT_D_6250(x) Returns TRUE if the density encoded in mt_gstat is 6250 bpi (with or without compression). GMT_D_6250c(x) Returns TRUE if the density encoded in mt_gstat is 6250 bpi plus compression. GMT_D_DDS1(x) Returns TRUE if the density encoded in mt_gstat is DDS1 (with or without compression). GMT_D_DDS1c(x) Returns TRUE if the density encoded in mt_gstat is DDS1 plus compression. GMT_D_DDS2(x) Returns TRUE if the density encoded in mt_gstat is DDS2 (with or without compression). GMT_D_DDS2c(x) Returns TRUE if the density encoded in mt_gstat is DDS2 plus compression. GMT_D_DLT_42500_24(x) Returns TRUE if the density encoded in mt_gstat is 42500 bpi, 24 track pairs. GMT_D_DLT_42500_56(x) Returns TRUE if the density encoded in mt_gstat is 42500 bpi, 56 track pairs. GMT_D_DLT_62500_64(x) Returns TRUE if the density encoded in mt_gstat is 62500 bpi (with or without compression). GMT_D_DLT_62500_64c(x) Returns TRUE if the density encoded in mt_gstat is 62500 bpi plus compression. Hewlett-Packard Company - 8 - HP-UX Release 11i: November 2000 mt(7) mt(7) GMT_D_DLT_81633_64(x) Returns TRUE if the density encoded in mt_gstat is 81633 bpi (with or without compression). GMT_D_DLT_81633_64c(x) Returns TRUE if the density encoded in mt_gstat is 81633 bpi plus compression. GMT_D_DLT_85937_52(x) Returns TRUE if the density encoded in mt_gstat is 85937 bpi (with or without compression). GMT_D_DLT_85937_52c(x) Returns TRUE if the density encoded in mt_gstat is 85937 bpi plus compression. GMT_D_3480(x) Returns TRUE if the density encoded in mt_gstat is for a 3480 device (with or without compression). GMT_D_3480c(x) Returns TRUE if the density encoded in mt_gstat is for a 3480 device with compression. GMT_D_QIC_11(x) Returns TRUE if the density encoded in mt_gstat is QIC-11 format. GMT_D_QIC_24(x) Returns TRUE if the density encoded in mt_gstat is QIC-24 format. GMT_D_QIC_120(x) Returns TRUE if the density encoded in mt_gstat is QIC-120 format. GMT_D_QIC_150(x) Returns TRUE if the density encoded in mt_gstat is QIC-150 format. GMT_D_QIC_525(x) Returns TRUE if the density encoded in mt_gstat is QIC-525 format. GMT_D_QIC_1000(x) Returns TRUE if the density encoded in mt_gstat is QIC-1000 format. GMT_D_QIC_1350(x) Returns TRUE if the density encoded in mt_gstat is QIC-1350 format. GMT_D_QIC_2100(x) Returns TRUE if the density encoded in mt_gstat is QIC-2100 format. GMT_D_QIC_2GB(x) Returns TRUE if the density encoded in mt_gstat is QIC-2GB format. GMT_D_QIC_5GB(x) Returns TRUE if the density encoded in mt_gstat is QIC-5GB format. Hewlett-Packard Company - 9 - HP-UX Release 11i: November 2000 mt(7) mt(7) GMT_D_8MM_8200(x) Returns TRUE if the density encoded in mt_gstat is 8 millimeter 8200 format (with or without compression). GMT_D_8MM_8200c(x) Returns TRUE if the density encoded in mt_gstat is 8 millimeter 8200 format with compression. GMT_D_8MM_8500(x) Returns TRUE if the density encoded in mt_gstat is 8 millimeter 8500 format (with or without compression). GMT_D_8MM_8500c(x) Returns TRUE if the density encoded in mt_gstat is 8 millimeter 8500 format with compression. GMT_MEDIUM(x) Identifies the 8-bit medium type value describing the tape currently loaded into the tape device. The reported value is only valid for QIC and 8mm devices. Supported values are defined in <sys/mtio.h>. GMT_QIC_MEDIUM(x) Returns the same information as does GMT_MEDIUM(x). GMT_MEDIUM(x) is preferred because GMT_QIC_MEDIUM may be obsoleted at some future date. GMT_DR_OPEN(x) Does not apply to any currently supported devices. Always returns FALSE. HP-UX silently enforces a tape record blocking factor (MAXPHYS) on large I/O requests. For example, a user write request with a length of ten times MAXPHYS will actually reach the media as ten separate records. A subsequent read (with ten times MAXPHYS as a length) will look like a single operation to the user, even though HP-UX has broken it up into ten separate read requests to the driver. The blocking function is transparent to the user during writes. It is also transparent during reads unless: + The user picks an arbitrary read length greater than MAXPHYS. + The user attempts to read a third-party tape containing records larger than MAXPHYS. Since the value for MAXPHYS is relatively large (usually >= 256K bytes), this is typically not a problem. The MTNOP operation does not set the device-independent status word. Hewlett-Packard Company - 10 - HP-UX Release 11i: November 2000 mt(7) mt(7) 3480 stacker devices are supported only in auto (that is, sequential- access) mode. To advance to the next tape in the stack, an MTIOCTOP control request specifying an MTOFFL operation should be issued. An MTIOCGET control request should then be issued to determine whether or not the stacker has been successfully advanced. Failure on the MTIOCGET operation (or an offline status) indicates that no more tapes are available in the stacker, the stacker has been ejected, and user intervention is required to load a new stack. EXAMPLES Assuming that fd is a valid file descriptor, the following example writes two consecutive filemarks on the tape: #include <<<<sys/types.h>>>> #include <<<<sys/mtio.h>>>> struct mtop mtop; mtop.mt_op = MTWEOF; mtop.mt_count = 2; ioctl(fd, MTIOCTOP, &&&&mtop); If fd is a valid file descriptor for an open DDS drive, the following example spaces forward to just past the next setmark: #include <<<<sys/types.h>>>> #include <<<<sys/mtio.h>>>> struct mtop mtop; mtop.mt_op = MTFSS; mtop.mt_count = 1; ioctl(fd, MTIOCTOP, &&&&mtop); Given that fd is a valid file descriptor for an opened tape device, and that it has just returned 0 from a read(2) request. The following system call verifies that the tape has just read a filemark: #include <<<<sys/types.h>>>> #include <<<<sys/mtio.h>>>> struct mtget mtget; ioctl(fd, MTIOCGET, &&&&mtget); if (GMT_EOF (mtget.mt_gstat)) { /* code for filemark detection */ } WARNINGS Density specifications BEST (standard naming convention) or f0 (alternate naming convention) activate data compression on tape Hewlett-Packard Company - 11 - HP-UX Release 11i: November 2000 mt(7) mt(7) devices which support compression. This is also true for the files using the pre-HP-UX 10.0 naming convention which are linked to these files (see "Naming Conventions" above). This means that a tape written using one of the eight device files (which are automatically created when the system is initialized) on a tape device which supports data compression, will contain compressed data. This tape cannot be successfully read on a tape device which does not support compressed data. For example, a tape written (using one of the eight automatically created device files) on a newer DDS device which supports data compression cannot be read on an older DDS device which does not support data compression. To accomplish data interchange between devices in a case such as this, a new device file must be manually created using the mksf(1M) command. In the above example, the options specified to mksf(1M) should include a density option with an argument of DDS1, and must not include a compression option. Use the mksf(1M) command instead of the mknod(1M) command to create tape device files. As of the 10.0 release, there are more configuration options than will fit in the device file's minor number. Prior to the 10.0 release, it was possible to select configuration options by directly setting the bits in the device special file's minor number using mknod(1M). As of the 10.0 release, a base set of configuration options are contained in the minor number. Extended configuration options are stored in a table of configuration properties. The minor number may contain an index into the property table, which is maintained by the tape driver and is not directly visible to the user. The mksf(1M) command sets the minor number and modifies the property table as needed based on mnemonic parameters passed into the command. If your device configuration requirements are limited to the base set of options, you need not be concerned with the property table. The base configuration options are as follows: + hardware address (card instance, target, and unit number) + density (from the set of pre-defined options listed in mksf(1M)) + compression (using the default compression algorithm) + rewind or no rewind + Berkeley or AT&T mode All other configuration options are extended options that result in use of the property table. Hewlett-Packard Company - 12 - HP-UX Release 11i: November 2000 mt(7) mt(7) It is recommended that all tape device files be put in the /dev/rmt directory. All tape device files using extended configuration options must be put in the /dev/rmt directory. This is required for proper maintenance of the property table. Device files using a extended configuration options located outside the /dev/rmt directory may not provide consistent behavior across system reboots. Use the rmsf(1M) command to clean up unused device files. Otherwise, the property table may overflow and cause the mksf(1M) command to fail. Density codes listed in <sys/mtio.h> have device-dependent behaviors. See the hardware manual for your tape device to find which densities are valid. For some devices, these values may be referred to as formats instead of densities. Use of unbuffered mode can reduce performance and increase media wear. Reads and writes from/to older (fixed block) devices such as QIC150 must occur at exact multiples of the supported block size. Write operations on a QIC device can be initiated only at BOT or EOD. QIC devices will not allow writes with the tape positioned in the middle of recorded data. The offline operation puts the QIC drive offline. The cartridge is not ejected as is done for DDS. To put the drive back online, the cartridge has to be manually ejected and then reinserted. Sequential-access devices that use the SCSI I/O interface may not always report true media position. On a 3480 device with data compression enabled, writing of a single record that cannot be compressed to less than 102,400 bytes is not supported. Note that using the 8200 format on 8500-style 8mm devices will significantly reduce tape capacity, and that only the 8500c-density setting provides support for setmarks. The maximum I/O request for 8mm devices is limited to 240KB. DEPENDENCIES Driver-Specific Options for stape (major number 205) The following options may be used in creating device special files for tape drives that access the stape driver: e Exhaustive mode is enabled (default is disabled). When exhaustive mode is enabled, the driver will, if necessary, attempt several different configuration options when opening a device. The first attempt follows the minor Hewlett-Packard Company - 13 - HP-UX Release 11i: November 2000 mt(7) mt(7) number configuration exactly, but if that fails, the driver attempts other likely configuration values. With Exhaustive mode disabled, the driver makes only one attempt to configure a device using the configuration indicated in the minor number. p Specifies a partitioned tape whose currently active partition is partition 1 (closest to BOT (beginning of tape)). Optional partition 1 is closest to BOT for possible use as a volume directory. The default partition without this option is partition 0. If partitioning is unsupported, the entire tape is referred to as partition 0. s[#] Specifies fixed-block mode; the optional number indicates the block size. If the number is not present, the driver selects a default block size appropriate to the device type. Driver Specific Options for tape2 (major number 212) The following options may be used in creating device special files for tape drives that access the tape2 driver: o Diagnostic messages to the console are suppressed. z The tape driver will attempt to mimic the behavior of RTE systems; that is, the driver will not do any tape alteration or movement when the device is closed. AUTHOR mt was developed by HP and the University of California, Berkeley. FILES /dev/rmt/* tape device special files <<<<sys/mtio.h>>>> constants and macros for use with tapes /etc/mtconfig configuration property table for tapes /dev/rmt/*config device files for accessing configuration properties table - for internal use only SEE ALSO dd(1), mt(1), ioctl(2), insf(1M), lssf(1M), mksf(1M), rmsf(1M), Configuring HP-UX for Peripherals Hewlett-Packard Company - 14 - HP-UX Release 11i: November 2000
http://modman.unixdev.net/?sektion=7&page=mt&manpath=HP-UX-11.11
CC-MAIN-2017-13
refinedweb
4,660
53.92
I've created a app in asp.net core and create a dockerfile to generate a local image and run it. FROM microsoft/dotnet:latest COPY . /app WORKDIR /app RUN ["dotnet", "restore"] RUN ["dotnet", "build"] EXPOSE 5000/tcp ENTRYPOINT ["dotnet", "run", "--server.urls", ""] docker build -t jedidocker/first . docker run -t -d -p 5000:5000 jedidocker/first ERR_EMPTY_RESPONSE I had the same issue and found the solution to this. You will need to change the default listening hostname. By default, the app will listen to the localhost, ignoring any incoming requests from outside the container. Change your code in program.cs to listen for all incoming calls: public class Program { public static void Main(string[] args) { var host = new WebHostBuilder() .UseKestrel() .UseContentRoot(Directory.GetCurrentDirectory()) .UseUrls("http://*:5000") .UseIISIntegration() .UseStartup<Startup>() .Build(); host.Run(); } } More info: Make sure you are running the container with the -p flag with the port binding. docker run -p 5000:5000 <containerid> At the time of this writing, it seems that the EXPOSE 5000:5000 command does not have the same effect as the -p command. I haven't had real success with -P (upper case) either. Edit 8/28/2016: One thing to note. If you are using docker compose with the -d (detached) mode, it may take a little while for the dotnet run command to launch the server. Until it's done executing the command (and any other before it), you will receive ERR_EMPTY_RESPONSE in Chrome.
https://codedump.io/share/JvqPuBQpnG6w/1/cannot-acces-aspnet-core-on-local-docker-container
CC-MAIN-2017-34
refinedweb
243
57.47
From the Journals of Tarn Barford Mar 13, 2009 I decided I'd have a look at Django to get a feel for a different web framework and to learn more about Python itself. Its no secret that I've been really interested in IronPython recently and I'm excited to give Python a go without the backing of the .NET framework (but with a complete standard library implementation, unlike IronPython). For this post I basically just followed the installation guide and the first 4 tutorials in the excellent Django Documentation. Its not my intention to reproduce the tutorials in this post, I really just want to discuss things I found interesting while learning the framework. After some fooling around I found the following versions of Python, Django and MySql were the easiest way to get started using Django on my Vista box. All of the versions below have windows installers so its pretty straight forward. Python 2.5.4 (MySQL Python didn't have an installer for 2.6) MySql 5.0 (I had problems installing 5.1 on my Vista Box, so I reverted to 5.0 which just worked) I deliberately didn't install the GUI tools for MySql as I wanted to get hardcore from the console. Djangos ORM can be used directly from the Python console and MySql has a complete command line interface. Django comes with a helper script to assist setting up and managing a site. To do things like create a new web project all you need to do is run following command: python django-admin.py startproject [sitename] This creates the skeleton project structure and files. . .. manage.py settings.py urls.py __init__.py You can immediately validate everything is on the right track by running the development server. python manage.py runserver I was pretty excited I could just create a new database from the same console window c:\Working>mysql -u root -p Enter password: ******* Welcome to the MySQL monitor. Commands end with ; or \g. Your MySQL connection id is 2 Server version: 5.0.77-community-nt MySQL Community Edition (GPL) Type 'help;' or '\h' for help. Type '\c' to clear the buffer. mysql> CREATE DATABASE demodb; Query OK, 1 row affected (0.03 sec) All I then needed to do was add my database credentials to the settings.py. Once this was setup I could use the syncdb command to build all the tables required by the framework (which includes an authorization module). python manage.py syncdb Which, without me doing anything else, creates the tables. Creating table auth_permission Creating table auth_group Creating table auth_user Creating table auth_message Creating table django_content_type Creating table django_session Creating table django_site As with most MVC web application frameworks, there is a standard project structure convention. Django offers another command to quickly create an app, which is basically a small web application. python manage.py startapp weblog This creates a sub folder with the following files empty files __init__.py models.py views.py Then custom models can be added to the models python file. (Yes, I see that we probably would want more characters for a post, but I just want to get something going for now) class Post(models.Model): content = models.CharField(max_length=200) pub_date = models.DateTimeField('date published') class Comment(models.Model): post = models.ForeignKey(Post) comment = models.CharField(max_length=200) To make the next bit of magic happen, we have to register this class in the main settings python file of this project. INSTALLED_APPS = ( 'django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.sites', 'DjangoSite.weblog', ) Now the syncdb command can be used again to create the new tables for our model. c:\Working\DjangoSite\DjangoSite>python manage.py syncdb Creating table weblog_post Creating table weblog_comment Installing index for weblog.Comment model We can now start checking out the ORM provided by Django in the interactive Python console. >>> from DjangoSite.weblog.models import * >>> import datetime >>> >>> post = Post(content = "Django Rocks!", pub_date=datetime.datetime.now()) >>> post.save() >>> >>> comment = Comment(post = post, comment="+1") >>> comment.save() >>> >>> findPost = Post.objects.get(id=post.id) >>> findPost.content u'Django Rocks!' >>> >>> comments = findPost.comment_set.all() >>> for c in comments: ... print c.comment ... >>> findPost.delete() That's pretty cool, and just 6 lines of code to create the models! A few lines need to be uncommented in the urls python file, another line added to the installed apps list and the new table needs to added to the database to enable the default administration features. For the mean time this will serve pages to manage user accounts, later we'll be able to use the administration features to create and a web administration interface for any of our models. The urlpatterns object is pretty cool. Its basically the URL routing and it uses regular expressions to map requests to handler scripts. More on this before this post is finished. from django.conf.urls.defaults import * # Uncomment the next two lines to enable the admin: from django.contrib import admin admin.autodiscover() urlpatterns = patterns('', (r'^admin/(.*)', admin.site.root) , ) Now if we run the server admin pages can be found from /admin directory of the site. I won't go into to much detail about how its done but by simply registering a model, web pages are provided to manage that model (finding, adding, deleting and updating). Its all very clever, the framework does lots of introspection to display the correct html controls for each field type. To separate the UI from the model, a class can be created which is used to guide how the admin framework renders the pages for the model, it is then used to map fields to properties on the model object. Getting some weblog pages setup is amazingly straight forward. Its just a matter of adding a regular expression into the urlpatterns that matches the type of URL you want to handle and a function to handle it. The most basic handler method is what you'd expect, it takes a http request and returns a http response. from django.http import HttpResponse def index(request): return HttpResponse("This is the posts page!") This can be expanded to actually use the weblog models. The code below gets the 5 most recent posts and creates a string with each post content separated with a tag. from DjangoSite.weblog.models import Post, Comment from django.http import HttpResponse def index(request): lastestPosts = Post.objects.all().order_by('-pub_date')[:5] output = '<br />'.join([p.content for p in lastestPosts]) return HttpResponse(output) Well that's pretty cool, but I don't think we'll be writing entire pages using standard Python. Besides, these look more like controllers than views, there must be more to it. With templates, we can throw objects at a template to be rendered. The index method can now be separated from the rendering of HTML from django.shortcuts import render_to_response from DjangoSite.weblog.models import Post, Comment from django.http import HttpResponse def index(request): latestPosts = Post.objects.all().order_by('-pub_date')[:5] return render_to_response('weblog/index.html', {'posts': latestPosts}) And the template (weblog/index.html) looks something like this {% if posts %} <ul> {% for post in posts %} <li>{{ post.content }}</li> {% endfor %} </ul> {% else %} <p>There are no posts</p> {% endif %} It worth noting this isn't just Python embedded in HTML. I think this text from the Django documentation describes this design decision: ..you'll want to bear in mind that the Django template system is not simply Python embedded into HTML. This is by design: the template system is meant to express presentation, not program logic It makes sense to me. A forms post data can be retrieved from a dictionary on the request object. def index(request): postTitle = resquest.POST["postTitle"] ... I'm sure Django has support for mapping posts to model objects, but I haven't seen it yet. It turns out there is heaps more support in Django for handling generic web problems. For example creating list and details pages is so common that Django has some generic functionality to help. You just need to provide a model and write some templates to describe how to render the models. It appears there is heaps more of the magic throughout the framework some things I haven't yet looked into are: I've just stepped through all the basic steps to building a functional database driven web application. Overall I'm pretty impressed; it didn't take much time, code or leaps in faith to get it going. It appears to do generic things, like what I was doing, is even easier as the framework has generic support for it. I am glad I got into it from the ground up, I think I would have been a lot more skeptical had I written a simple weblog without writing a line of Python. I've come to learn that its not about the 90% things that are easy to do in a web application framework, its about the 10% of things you want to do but don't work out of the box. Of course I haven't had enough exposure with Django to say how deep the application framework actually goes. I personally feel that Django and Python are a fantastic language and framework to deal with the 10% of requirements that don't come out of the box. On anther note, I personally love coding from the console rather than an IDE, even with limited exposure I was getting pretty quick navigating my project, running the Django web development helpers and managing MySql. I personally think it's faster developing from a console with all your tools accessible without taking your hands of the keyboard. Its been lots of fun playing with the framework. I'd definitely like to build a site in Django some time, but I'm not sure when I'll find the time. There are no comments
http://tarnbarford.net/journal/will-you-django-with-me
CC-MAIN-2014-52
refinedweb
1,660
55.95
CodePlexProject Hosting for Open Source Software Is there a way to change to placement position of a list? I have a list that is currently rendered at the end of a container content item, I would like to have it rendered before other content. I have tried to use the placement file to change its position but can't seem to figure out what the name of the template is? I notice that a list is rendered using a Template Method called List, but can't seem to override its placement. Any ideas? Thanks. List :) It should work. What does your placement look like? Ok, thats what I thought, I also tried adding in the full namespace! Here is my placement file, I'm just trying to hide it first to make sure its being applied... <Placement> <Place Parts_Common_Metadata_Summary="-"/> <Place Parts_Common_Metadata="-"/> <Place List="-"/> <Match ContentType="Course"> <Place Fields_Contrib_TaxonomyField- <Place Fields_Contrib_TaxonomyField- <Place Fields_Contrib_TaxonomyField- <Match DisplayType="Summary"> <Place Fields_Contrib_TaxonomyField- <Place Fields_Contrib_TaxonomyField- <Place Parts_Common_Body_Summary="-"/> </Match> <Match DisplayType="Detail"> <Place Fields_Contrib_TaxonomyField- <Place Fields_Contrib_TaxonomyField- <Place Fields_Contrib_TaxonomyField- <Place Fields_Custom_TextAreaField="-"/> </Match> </Match> <Match ContentType="CourseDelivery"> <Match DisplayType="Summary"> <Place Fields_Contrib_File="Content:after"/> </Match> </Match> </Placement> So why are you sending it to "-"? That will suppress it altogether. I just want to test that it is matching, once I can see it disappearing I will play around with positioning. I'll try it and get back to you. From: bertrandleroy I'll try it and get back to you. From: bertrandleroy We looked with Sébastien and that won't quite work because of the way lists are getting renedered currently. Sébastien can provide more details. Not sure how to work around it yet. From: bertrandleroy We looked with Sébastien and that won't quite work because of the way lists are getting renedered currently. Sébastien can provide more details. Not sure how to work around it yet. bump.... Sébastien, any ideas on rendering a list in a different position? Cheers :) A change have been submitted on the 1.x branch so that List are now using Drivers, and should be handled by placement.info, using the Parts_Container_Contained key. Sebastian, This maybe a little off topic but if we're currently working with version 1.3 and then we implement the changes you've made, I noticed there are new columns being added in Orchard.Core.Containers.Migrations.Create() that were not there before. If I understand Migrations correctly this will not be called by my existing site and therefore I'm running into errors. Am I missing something? rahulbpatel, you're right, the migration is erroneous. I'll comment on this in where the changes were discussed. Actually - forget I said that ;). There's no problem in the migration, I just got mixed up looking at Hg revision logs. What you'll see is that the Create method runs all the migrations needed, and returns 3 - indicating the database is up to the latest revision. However for an upgrading user, they will instead get UpdateFrom1() and UpdateFrom2() for an upgrade path to the same latest revision. It's just so there's only a single install step for a new installation. Are you sure you want to delete this post? You will not be able to recover it later. Are you sure you want to delete this thread? You will not be able to recover it later.
http://orchard.codeplex.com/discussions/275502
CC-MAIN-2017-26
refinedweb
559
57.67
I looked for a brass sputnik forever. I would have paid real money, maybe even more than $20, had the right one shown itself. But only the dented, cheapy faux brass, overpriced variety surfaced. Lots of "these sell for $5000 and I am only asking $1500 cash no haggling no dealers firm don't even try to talk me down!" Craigslist ads. Speaking of delusions of grandeur... Well, no big thang. I wanted a Lindsey Adelman anyways. Trendy? Yeah, I guess in certain (blogger) circles it is. But more interpretive than a sputnik. Sculptural. Branch-like. Able to be repositioned with a single impatient hand. I bought the approximately $120 worth of parts from Adelman's online list (this included ten Edison bulbs) and handed my husband the plans. It only took ten minutes to put it together, but another few hours to work out the wiring. It's not that the instructions were unclear, it's just that jamming six wires through a tiny pipe was like trying to fit a Costco shopping trip into a NYC apartment kitchen. I bought an old brass ceiling canopy and had the talented husband hardwire it into our existing junction box. The plans call for using it as a swag fixture, but I said no gracias to twenty feet of chains draped all over my dining room. La Boeuf plastered the hanging rod straight into his ceiling, which I've never seen done: M'lady Morgan made a beautiful version: My hanging rod is shorter, since we walk under part of the fixture and we're not headbangers (anymore!) I'm really happy with the look. It has sort of an industrial-classic look. The best thing is that all the parts are solid brass. I thought I would have to apply some vinegar to age it, but my husband assured me that it would oxidize just from being handled while he built it, and he was totally right. Within a couple of days the brass had a nice aged look. This is a good job for your greasy, sweaty teenagers. If you need your brass aged, send it over here. We have sweaty palms and greasy fingers to spare! Yep, Thomas Jefferson's dining set is no more. Someday that will be an oval marble tulip table, instead of the one I rescued from my neighbor's back yard. Someday when people ask where I found a piece, I won't have to give them geographical coordinates but actual store names. Not that I mind. So, what's the consensus? Love or loathe? ..................................................................................... Love. This is the first one I've seen against dark walls and it kills. Nice job! This is totally gorgeous....nice job! Do you think a total amateur could do it?? Awesome! I have had this project bookmarked for our dining area (bye-bye dusty 7 year old clearance shelf West Elm fixture!) I'm glad you did this first, this way I can call you or the talented Mr. Hubs if I have any questions. Your dining room looks fantastic. this looks fantastic! i love it! will have to bookmark this for when i return home from my time in india - it's something i'll definitely be trying...will have to get over my fear of working with wires though! Are you kidding me? I love this!! Martha and her grandmother have been to my house too...I think I might need a little lighting DIY. I have a very similar light fixture that I got from my aunt...but, when I plugged it in the bulbs were too bright...can't seem to find smaller watts for it...sad face. buy dimmers Love the lights! it is adorable!!! I have a website.. nursery chandelier I have to admit, after this post we had a little...explosion. Be really careful with the wiring...we nicked a wire pushing it through and it made contact with the metal rod. Hello! I love your L. Adelman fixture and am planning on putting one in my own dining room. I'm curious how your hubby got the hanging rod into ceiling canopy.. Can you explain that for me? Thank you, and it's beautiful! Hi chefjohnny-apologies for not seeing your comment earlier. The hanging rod is threaded into the ceiling bracket and the ceiling canopy just slides over and some decorative screws keep it in place. If you want us to drop the canopy a bit and show you what it looks like inside, pm me at modernhaus@gmail.com A. says "They did that themselves? Wow, that is soo cool, Mom! Do you think you could do that? You could probably just ask S. and her husband cuz they seem to know how to do lots of stuff." Yep. Totally cool. Love your light fixture and would love to make one too! The details of where to get the parts don't seem to be on the Lindsey Adelman website anymore. Do you happen to have details where you go them. Love the brass base you added! I assumed that most of thе teсh press wаѕ ѕaying Apple's newest incarnation of the iphone case we hope Santa will bring us this Christmas. Zero-in on a few of the things missing from the market? An unruly suspect in the back door of client Melissa Stonecipher's еstrаnged husband's home. Taiwanese electronic giant HTC aims to blockiphone5's import in the US foг a suggеsted retаil pricе οf $199. Is the hanging rod at the top included in the kit or did you purchase separately? I would need mine a bit longer so am curious as to whether or not I will need a special part. Thanks! Your fixture looks great! Sorry to hear about the "explosion!" -- Sarah I love it when people get together and share opinions. Great blog, stick with it! Here is my web-site ... Christian Louboutin Sale I just made one this week! I too am hard-wiring it into my ceiling with a canopy and using a shorter stem as we will walk under it. I did spend some time changing the arms around a little bit so that there's one flexible elbow on one side and two on the other, instead of all three on one side. I need a tad more balance-- my personal preference. And yes! stuffing the wires through the tubes was a challenge. I discovered halfway into the project that they slipped through more easily if I carefully straightened and aligned the wires before pushing them. But getting them through the elbows was even tougher, and because everything screws together and the wires can't move once they are through the elbows, I had to put them through the elbows first and then work outward from there in both directions. It was challenging but fun to figure out. I'm really thrilled with the result. I'm hanging it tomorrow. Also, I love the look of edison bulbs but I switched to low-wattage bulbs years ago and love my low electric bills more. So I searched the internet and found LED bulbs that mimic edison bulbs and are dim-able (!) and come (if you want) with a golden hue to the glass. Also, I thought that I should test the fixture before hard-wiring it in, so I temporarily attached the supplied plug on the bare wires and EUREKA! all the bulbs lit up! --Iv
http://modernhaus.blogspot.com/2011/10/watch-someone-else-make-diy-lindsey.html
CC-MAIN-2018-30
refinedweb
1,252
84.27
In this blog post you’re going to learn how to decode (parse a JSON string) and encode (generate a JSON string) with the ArduinoJson library using the Arduino with the Ethernet shield. This guide also works with the ESP8266 and ESP32 Wi-Fi modules with small changes. Important: this tutorial is only compatible with the ArduinoJSON library 5.13.5. What is JSON? JSON stands for JavaScript Object Notation. JSON is a lightweight text-based open standard design for exchanging data. JSON is primarily used for serializing and transmitting structured data over network connection – transmit data between a server and a client. It is often used in services like APIs (Application Programming Interfaces) and web services that provide public data. JSON syntax basics In JSON, data is structured in a specific way. JSON uses symbols like { } , : ” ” [ ] and it has the following syntax: - Data is represented in key/value pairs - The colon (:) assigns a value to key - key/value pairs are separated with commas (,) - Curly brackets hold objects ({ }) - Square brackets hold arrays ([ ]) For example, to represent data in JSON, the key/value pairs come as follows: {"key1":"value1", "key2":"value2", "key3":"value3"} JSON examples In a real world example, you may want structure data about a user: {"name":"Rui", "country": "Portugal", "age":24} Or in a IoT project, you may want to structure data from your sensors: {"temperature":27.23, "humidity":62.05, "pressure":1013.25} In JSON, the values can be another JSON object (sports) or an array (pets) . For example: { "name": "Rui", "sports": { "outdoor": "hiking", "indoor": "swimming" }, "pets": [ "Max", "Dique" ] } Here we are structuring data about a user and we have several keys: “name”, “sports” and “pets”. The name has the value Rui assigned. Rui may practice different sports relating to where they are practiced. So, we create another JSON object to save Rui’s favorite sports. This JSON object is the value of the “sports” key. The “pets” key has an array that contains Rui’s pets’ names and it has the values “Max” and “Dique” inside. Most APIs return data in JSON and most values are JSON objects themselves. The following example shows the data provided by a weather API. { "coord":{ "lon":-8.61, "lat":41.15 }, "weather":[ { "id":803, "main":"Clouds", "description":"broken clouds", "icon":"04d" } ], "base":"stations", "main":{ "temp":288.15, "pressure":1020, "humidity":93, "temp_min":288.15, "temp_max":288.15 }, (...) } This API provides a lot of information. For example, the first lines store the coordinates with the longitude and latitude. Arduino with Ethernet shield The examples in this post use an Arduino with an Ethernet shield. Just mount the shield onto your Arduino board and connect it to your network with an RJ45 cable to establish an Internet connection (as shown in the figure below). Note: the examples provided in this tutorial also work with the ESP8266 and ESP32 with small changes. Preparing the Arduino IDE The easiest way to decode and encode JSON strings with the Arduino IDE is using the ArduinoJson library 5.13.5 which was designed to be the most intuitive JSON library, with the smallest footprint and most efficiently memory management for Arduino. It has been written with Arduino in mind, but it isn’t linked to Arduino libraries so you can use this library in any other C++ project. There’s also a documentation website for the library with examples and with the API reference. Features - JSON decoding (comments are supported) - JSON encoding (with optional indentation) - Elegant API, very easy to use - Fixed memory allocation (zero malloc) - No data duplication (zero copy) - Portable (written in C++98) - Self-contained (no external dependency) - Small footprint - Header-only library - MIT License Compatible with - Arduino boards: Uno, Due, Mini, Micro, Yun… - ESP8266, ESP32 and WeMos boards - Teensy, RedBearLab boards, Intel Edison and Galileo - PlatformIO, Particle and Energia Installing the ArduinoJson library For this project you need to install the ArduinoJson library in your Arduino IDE: - Click here to download the ArduinoJson version 5.13.5. Decoding JSON – Parse JSON string Let’s start by decoding/parsing the next JSON string: {"sensor":"gps","time":1351824120,"data":[48.756080,2.302038]} Import the ArduinoJson library: #include <ArduinoJson.h>#include <ArduinoJson.h> Arduino JSON uses a preallocated memory pool to store the JsonObject tree, this is done by the StaticJsonBuffer. You can use ArduinoJson Assistant to compute the exact buffer size, but for this example 200 is enough. StaticJsonBuffer<200> jsonBuffer; Create a char array called json[] to store a sample JSON string: char json[] = "{\"sensor\":\"gps\",\"time\":1351824120,\"data\":[48.756080,2.302038]}"; Use the function parseObject() to decode/parse the JSON string to a JsonObject called root. JsonObject& root = jsonBuffer.parseObject(json); To check if the decoding/parsing was successful, you can call root.success(): if(!root.success()) { Serial.println("parseObject() failed"); return false; } The result can be false for three reasons: - the JSON string has invalid syntax; - the JSON string doesn’t represent an object; - the StaticJsonBuffer is too small – use ArduinoJson Assistant to compute the buffer size. Now that the object or array is in memory, you can extract the data easily. The simplest way is to use the JsonObject root: const char* sensor = root["sensor"]; long time = root["time"]; double latitude = root["data"][0]; double longitude = root["data"][1]; You can use the decoded variables sensor, time, latitude or longitude in your code logic. OpenWeatherMap API For a real example using an Arduino with an Ethernet shield, we’re going to use a free API from OpenWeatherMap to request the day’s weather forecast for your chosen location. Learning to use APIs is a great skill because it allows you access to a wide variety of constantly-changing information, such as the current stock price, the currency exchange rate, the latest news, traffic updates, and much more. Using the API_3<< information. In our case, it returns the weather in Porto, Portugal on the day of writing: { "coord": { "lon": -8.61, "lat": 41.15 }, "weather": [ { "id": 701, "main": "Mist", "description": "mist", "icon": "50d" } ], "base": "stations", "main": { "temp": 290.86, "pressure": 1014, "humidity": 88, "temp_min": 290.15, "temp_max": 292.15 }, (...) } Making an API Request with Arduino Now that you have a URL that returns your local weather data. You can automate this task and access that data in your Arduino or ESP8266 projects. Here’s the full script that you need to upload to your Arduino with Ethernet shield to return the temperature in Kelvin and humidity: /* *; // Name address for Open Weather Map API const char* server = "api.openweathermap.org"; // Replace with your unique URL resource const char* resource = "REPLACE_WITH_YOUR_URL_RESOURCE"; // How your resource variable should look like, but with your own COUNTRY CODE, CITY and API KEY (that API KEY below is just an example): //const char* resource = "/data/2.5/weather?q=Porto,pt&appid=bd939aa3d23ff33d3c8f5dd1"; const unsigned long HTTP_TIMEOUT = 10000; // max respone time from server const size_t MAX_CONTENT_SIZE = 512; // max size of the HTTP response byte mac[] = {0xDE, 0xAD, 0xBE, 0xEF, 0xFE, 0xED}; // The type of data that we want to extract from the page struct clientData { char temp[8]; char humidity[8]; }; //)) { if(sendRequest(server, resource) && skipResponseHeaders()) { clientData clientData; if(readReponseContent(&clientData)) { printclientData(&clientData); } } } disconnect(); wait(); } // Open connection to the HTTP server bool connect(const char* hostName) { Serial.print("Connect to "); Serial.println(hostName); bool ok = client.connect(hostName, 80); Serial.println(ok ? "Connected" : "Connection Failed!"); return ok; } // Send the HTTP GET request to the server bool sendRequest(const char* host, const char* resource) { Serial.print("GET "); Serial.println(resource); client.print("GET "); client.print(resource); client.println(" HTTP/1.1"); client.print("Host: "); client.println(host); client.println("Connection: close"); client.println();; } // Parse the JSON from the input string and extract the interesting values // Here is the JSON we need to parse /*{ "coord": { "lon": -8.61, "lat": 41.15 }, "weather": [ { "id": 800, "main": "Clear", "description": "clear sky", "icon": "01d" } ], "base": "stations", "main": { "temp": 296.15, "pressure": 1020, "humidity": 69, "temp_min": 296.15, "temp_max": 296.15 }, "visibility": 10000, "wind": { "speed": 4.6, "deg": 320 }, "clouds": { "all": 0 }, "dt": 1499869800, "sys": { "type": 1, "id": 5959, "message": 0.0022, "country": "PT", "sunrise": 1499836380, "sunset": 1499890019 }, "id": 2735943, "name": "Porto", "cod": 200 }*/ bool readReponseContent(struct clientData* clientData) { // Compute optimal size of the JSON buffer according to what we need to parse. // See); if (!root.success()) { Serial.println("JSON parsing failed!"); return false; } // Here were copy the strings we're interested in using to your struct data strcpy(clientData->temp, root["main"]["temp"]); strcpy(clientData->humidity, root["main"]["humidity"]); // It's not mandatory to make a copy, you could just use the pointers // Since, they are pointing inside the "content" buffer, so you need to make // sure it's still in memory when you read the string return true; } // Print the data extracted from the JSON void printclientData(const struct clientData* clientData) { Serial.print("Temp = "); Serial.println(clientData->temp); Serial.print("Humidity = "); Serial.println(clientData->humidity); } // Close the connection with the HTTP server void disconnect() { Serial.println("Disconnect"); client.stop(); } // Pause for a 1 minute void wait() { Serial.println("Wait 60 seconds"); delay(60000); } Note: make sure your replace the resource variable with your unique OpenWeatherMap URL resource: const char* resource = "REPLACE_WITH_YOUR_URL_RESOURCE"; Modifying the code for your project In this example, the Arduino performs an HTTP GET request to a desired service (in this case the OpenWeatherMap API), but you could change it to request any other web service. We won’t explain the Arduino code line by line. For this project it’s important that you understand what you need to change in the Arduino code to decode/parse any JSON response. Follow these next three steps. STEP #1 – struct Create a data structure that can store the information that you’ll want to extract from the API. In this case, we want to store the temperature and humidity in char arrays: struct clientData { char temp[8]; char humidity[8]; }; STEP #2 – JsonBuffer size Go to the ArduinoJson Assistant and copy the full OpenWeatherMap API response to the Input field. Copy the Expression generated (see the preceding figure), in my case: JSON_ARRAY_SIZE(1) + JSON_OBJECT_SIZE(1) + 2*JSON_OBJECT_SIZE(2) + JSON_OBJECT_SIZE(4) + JSON_OBJECT_SIZE(5) + JSON_OBJECT_SIZE(6) + JSON_OBJECT_SIZE(12) You’ll need to edit the readReponseContent() function with your the generated JsonBuffer size from ArduinoJson Assistant to allocate the appropriate memory for decoding the JSON response from an API: bool readReponseContent(struct clientData* clientData) {); Still inside the readReponseContent() function you need to copy the variables that you need for your project to your struct data: strcpy(clientData->temp, root["main"]["temp"]); strcpy(clientData->humidity, root["main"]["humidity"]); STEP #3 – accessing the decoded data Then, you can easily access the decoded JSON data in your Arduino code and do something with it. In this example we’re simply printing the temperature in Kelvin and humidity in the Arduino IDE serial monitor: void printclientData(const struct clientData* clientData) { Serial.print("Temp = "); Serial.println(clientData->temp); Serial.print("Humidity = "); Serial.println(clientData->humidity); } Demonstration Open the Arduino IDE serial monitor at a baud rate of 9600 and you’ll see the temperature in Kelvin and the humidity in percentage being printed in the Serial monitor every 60 seconds. You can access the rest of the information in the OpenWeatherMap API response, but for demonstration purposes we only decoded the temperature and humidity. Encoding JSON – Generate JSON string Let’s learn how to encode/generate the next JSON string: {"sensor":"gps","time":1351824120,"data":[48.756080,2.302038]} You can read the docs about enconding here. Import the ArduinoJson library: #include <ArduinoJson.h>#include <ArduinoJson.h> Arduino JSON uses a preallocated memory pool to store the object tree, this is done by the StaticJsonBuffer. You can use ArduinoJson Assistant to compute the exact buffer size, but for this example 200 is enough. StaticJsonBuffer<200> jsonBuffer; Create a JsonObject called root that will hold your data. Then, assign the values gps and 1351824120 to the sensor and time keys, respectively: JsonObject& root = jsonBuffer.createObject(); root["sensor"] = "gps"; root["time"] = 1351824120; Then, to hold an array inside a data key, you do the following: JsonArray& data = root.createNestedArray("data"); data.add(48.756080); data.add(2.302038); It is very likely that you’ll need to print the generated JSON in your Serial monitor for debugging purposes, to do that: root.printTo(Serial); After having your information encoded in a JSON string you could post it to another device or web service as shown in the next example. Encoding example with Arduino and Node-RED For this example you need to Node-RED or a similar software that can receive HTTP POST requests. You can install Node-RED on your computer, but I recommend running Node-RED on a Raspberry Pi. Creating the flow In this flow, you’re going to receive an HTTP POST request and print the receive data in the Debug window. Follow these next 6 steps to create your flow: 1) Open the Node-RED software in your browser 2) Drag an HTTP input node and a debug node 3) Edit the HTTP input by adding the POST method and the /json-post-example URL 4) You can leave the default settings for the debug node 5) Connect your nodes 6) To save your application, you need to click the deploy button on the top right corner Your application is saved and ready. Send JSON data with Arduino After having Node-RED prepared to receive POST requests at the /json-post-example URL, you can use the next code example on an Arduino with an Ethernet shield to send data to it. /* *; // Replace with your Raspberry Pi IP address const char* server = "REPLACE_WITH_YOUR_RASPBERRY_PI_IP_ADDRESS"; // Replace with your server port number frequently port 80 - with Node-RED you need to use port 1880 int portNumber = 1880; // Replace with your unique URL resource const char* resource = "/json-post-example"; const unsigned long HTTP_TIMEOUT = 10000; // max respone time from server const size_t MAX_CONTENT_SIZE = 512; // max size of the HTTP response byte mac[] = {0xDE, 0xAD, 0xBE, 0xEF, 0xFE, 0xED}; //, portNumber)) { if(sendRequest(server, resource) && skipResponseHeaders()) { Serial.print("HTTP POST request finished."); } } disconnect(); wait(); } // Open connection to the HTTP server (Node-RED running on Raspberry Pi) bool connect(const char* hostName, int portNumber) { Serial.print("Connect to "); Serial.println(hostName); bool ok = client.connect(hostName, portNumber); Serial.println(ok ? "Connected" : "Connection Failed!"); return ok; } // Send the HTTP POST request to the server bool sendRequest(const char* host, const char* resource) { // Reserve memory space for your JSON data StaticJsonBuffer<200> jsonBuffer; // Build your own object tree in memory to store the data you want to send in the request JsonObject& root = jsonBuffer.createObject(); root["sensor"] = "dht11"; JsonObject& data = root.createNestedObject("data"); data.set("temperature", "30.1"); data.set("humidity", "70.1"); // Generate the JSON string root.printTo(Serial); Serial.print("POST "); Serial.println(resource); client.print("POST "); client.print(resource); client.println(" HTTP/1.1"); client.print("Host: "); client.println(host); client.println("Connection: close\r\nContent-Type: application/json"); client.print("Content-Length: "); client.print(root.measureLength()); client.print("\r\n"); client.println(); root.printTo(client);; } // Close the connection with the HTTP server void disconnect() { Serial.println("Disconnect"); client.stop(); } // Pause for a 1 minute void wait() { Serial.println("Wait 60 seconds"); delay(60000); } Note: make sure your replace the server variable with your Raspberry Pi IP address: const char* server = "REPLACE_WITH_YOUR_RASPBERRY_PI_IP_ADDRESS"; Modifying the code for your project In this example, the Arduino performs an HTTP POST request to Node-RED, but you could change it to make request another web service or server. We won’t explain the Arduino code line by line. For this project it’s important that you understand what you need to change in the Arduino code to encode/generate a JSON request. sendRequest() function For this project you can modify the sendRequest() function with your own JSON data structure: bool sendRequest(const char* host, const char* resource) { Start by reserving memory space for your JSON data. You can use ArduinoJson Assistant to compute the exact buffer size, but for this example 200 is enough. StaticJsonBuffer<200> jsonBuffer; Create a JsonObject called root that will hold your data and assign the values to your keys (in this example we have the sensor key): JsonObject& root = jsonBuffer.createObject(); root["sensor"] = "dht11"; To hold data inside an array, you do the following: JsonObject& data = root.createNestedObject("data"); data.set("temperature", "30.1"); data.set("humidity", "70.1"); Print the generated JSON string in the Arduino IDE serial monitor for debugging purposes: root.printTo(Serial); The rest of the sendRequest() function is the POST request. client.print("POST "); client.print(resource); client.println(" HTTP/1.1"); client.print("Host: "); client.println(host); client.println("Connection: close\r\nContent-Type: application/json"); client.print("Content-Length: "); client.print(root.measureLength()); client.print("\r\n"); client.println(); root.printTo(client); Note that you can use the root.measureLength() to determine the length of your generated JSON. The root.printTo(client) send the JSON data to the Ethernet client. Demonstration Open the Arduino IDE serial monitor at a baud rate of 9600 and you’ll see the JSON object printed in the serial monitor every 60 seconds. In the Node-RED debug tab you’ll see the same JSON object being receive every 60 seconds: Finally, you create functions in Node-RED that do something useful with the received data, but for demonstration purposes we’re just printing the sample data. Wrapping up In this tutorial we’ve shown you several examples on how to decode and encode JSON data. You can follow these basic steps to build more advanced projects that require exchanging data between devices. We hope you’ve found this tutorial useful. If you liked this project and Home Automation make sure you check our course: Build a Home Automation System for $100. 30 thoughts on “Decoding and Encoding JSON with Arduino or ESP8266” Hi, Again an awesome post ! I have been following your post for nearly a year and in almost posts I find a new flavour. Thanks for your good work specially with esp8266. Will keep in touch. You can drop a mail from your personal email id. Thank you, Anupam Majumdar, India You’re welcome! Thanks for reading I have a ESP32, and get error of migration from 5 to 6, this not work, help ? Very very good tutorial regarding JSON. Thanks a lot!! You’re welcome Peter! I finally posted it, I hope it was helpful. I’ve tried to come up with examples that you could take and use in your own project regardless of the API. If you want to use the ESP8266, you simply need to use the HTTP GET or POST examples with WiFi client, but the process of encoding and decoding JSON is the same! Regards, Rui Can you share the esp8266 code as well? Hi. The json library works the same way for Arduino and ESP8266. The code provided refers to an Arduino with an Ethernet shield. To make it work with ESP8266, you need to modify the part in which you establish an Ethernet connection. The rest of the code works the same. I hope this helps. Thanks very good tutorial ; worked on first attempt ; I converted it into ESP8266 thank you very much You’re welcome! Thanks for trying it 🙂 Thanks for this awesome tutorial ? I’m glad to see that you made a good use of ArduinoJson and shared your experience. However, I think the easiest way to install the library is to use the Arduino Library Manager. That way, you don’t need to download the zip file. See: Again thanks for sharing this and keep up with these excellent tutorials. Hi Benoit. Yes, the library manager is now the easiest way to install libraries. However, some of the libraries are not available through that method. With this method, we can guarantee that everyone gets the latest version. But yes, ArduinoJson is in Arduino’s Library Manager and they can automatically install it from there. Thank you for your support. Regards Sara 🙂 Agree Merci beaucoup This was absolutely the most helpful tutorial I have found on this topic. Thank you so much for posting it! I’m trying to adapt this to work with the Darksky.net API, because the weather data they provide is more useful to me. However, I’m getting the “Json parsing failed” message, and I can’t figure out why. I’d like to see the text that it is receiving and trying to parse. Can you tell me how to do that? Are there common mistakes that lead to the “parsing failed” message? I’m a noob, obviously. Trying to finish a project for Xmas. To be honest I’m not sure, it can be many different projects that are causing that issue… Did you try my exact example? Did it work for you? Only after that I would start modifying the project… THANK YOU!! AMAZING TUTORIAL! Hi, the JSON Encode exemple link is off. Can you upload it again ? Hi Roberto. That link worked for the oldest library version. The new one is in this link: github.com/bblanchon/ArduinoJson/blob/6.x/examples/JsonGeneratorExample/JsonGeneratorExample.ino I have to update the links. Thanks for noticing. Regards, Sara can you do the same for esp8266 This works with Arduino and ESP8266. Regards, Sara Muito obrigado por compartilhar um pouco de seu conhecimento, me ajudou bastante. Tudo de bom para você! Thank you, For running a WEMOS using DeepSleep mode on battery , can you delete anything in the Loop, move it to SETUP? Yes. You can do that. Regards, Sara Thank you for the great work, Benoit! As nowadays most of the web services run over secure connections (HTTPS), could you share how to use ArduinoJson (I am trying to use it with the WiFiClientSecure.h library on a NodeMCU ESP8266 V3) Thank you! …And many thanks to Rui for creating this tutorial – I messed around with the authors! ;( Hi Sara! I have an ESP 8266 + BME 280 (server) and a combi board Arduino Mega + ESP 8266, which is connected to a 3.2-inch display. I am trying to start a data transfer scheme on an display. In the combi board, Arduino Mega and ESP 8266 transmit data on Serieal3 (Mega). With the help of your tutorials, I managed to transfer data from the server to the ESP 8266 on the combi board, as well as output to Serial3 data in the following format: HTTP Response code: 200 { “Temp2”: 31.13, “Hum2”: 6.009766, “Pres2”: 733.5735 } Parse JSON string is now required to display the variables Temp2, Hum2, Pres2. But how to do that? I can’t get Parse JSON string from Serial3. I can’t find the answer anywhere. And another question: how to reduce the number of decimal places in json? In Serial the data has 1-2 decimal places, and in json it is too much. Thank you! And thanks again for your work! Thank you for this straight forward tutorial. I was able to setup a Wemos to send data to thingspeak without issues. But I need to send data to our server from the Institute where I work, and I need to know if I can use the GET request instead POST? It is the same? Hello Rui, Sara, I love your tutorials! I have made an ESP32 LED matrix and it currently displays (the day, date and year), (OpenWeather API for up to 5 different cities), (a user message) and (special events). Using ESPAsyncWebServer I have several webpages where I can change many of the ESP settings. I have some of these setting stored in preferences but most are kept in text files. These setting are all loaded/updated is a reboot or web-post. As I progress, I need a method to be able to edit my event list. Currently it is an array hard coded in the sketch. So to add/delete or edit an event I have to recompile and OTA update every time. I believe my solution is to use a JSON file. In fact I could put all my settings into one file. I need some guidance, I have made an eventlist.json, and using a webserver, I can open and display all the events in the array. I haven’t tried using the ESP yet. What I am having trouble with is how to put all the data into a form I can edit, then update the JSON file. Any direction you can point me in will be appreciated. Hi, Take a look at the library documentation, it may help: Regards, Sara Hi, Your tutorial is great. I have my Arduino UNO connected to NodeMCU serially, when i try to serialise json data from Arduino to NodeMCU with hard coded sensor values. And again parse data at NodeMCU. It works well. I tried to capture real values inside the Json in Arduino UNO from the sensors fixed ex. Temperature and Humidity values that are not hard coded. It goes quite well. But when I try to parse the same values in NodeMCU. I could not get the sensor values instead i get the hard coded values inserted in the json structure of parsing program.
https://randomnerdtutorials.com/decoding-and-encoding-json-with-arduino-or-esp8266/?replytocom=731532
CC-MAIN-2022-27
refinedweb
4,219
54.63
#include <mpi.h> int MPI_Init(int *pargc, char ***pargv) MPI specifies no command-line arguments but does allow an MPI implementation to make use of them. LAM/MPI neither uses nor adds any values to the argc and argv parameters. As such, it is legal to pass NULL for both argc and argv in LAM/MPI. Instead, LAM/MPI relies upon the mpirun command to pass meta-information between nodes in order to start MPI programs (of course, the LAM daemons must have previously been launched with the lamboot command). As such, every rank in MPI_COMM_WORLD will receive the argc and argv that was specified with the mpirun command (either via the mpirun command line or an app schema) as soon as main begins. See the mpirun (1) man page for more information. If mpirun is not used to start MPI programs, the resulting process will be rank 0 in MPI_COMM_WORLD , and MPI_COMM_WORLD will have a size of 1. This is known as a "singleton" MPI. It should be noted that LAM daemons are still used for singleton MPI programs - lamboot must still have been successfully executed before running a singleton process. LAM/MPI takes care to ensure that the normal Unix process model of execution is preserved: no extra threads or processes are forked from the user's process. Instead, the LAM daemons are used for all process management and meta-environment information. Consequently, LAM/MPI places no restriction on what may be invoked before MPI_INIT* or after MPI_FINALIZE ; this is not a safe assumption for those attempting to write portable MPI programs - see "Portability Concerns", below. MPI mandates that the same thread must call MPI_INIT (or MPI_INIT_THREAD ) and MPI_FINALIZE . Note that the Fortran binding for this routine has only the error return argument ( MPI_INIT(ierror) ). Because the Fortran and C versions of MPI_INIT are different, there is a restriction on who can call MPI_INIT . The version (Fortran or C) must match the main program. That is, if the main program is in C, then the C version of MPI_INIT must be called. If the main program is in Fortran, the Fortran version must be called. LAM/MPI uses the value of argv[0] to identify a process in many of the user-level helper applications (mpitask and mpimsg, for example). Fortran programs are generally identified as "LAM_MPI_Fortran_program". However, this name can be overridden for Fortran programs by setting the environment variable "LAM_MPI_PROCESS_NAME".). Applications using MPI_INIT are effectively invoking MPI_INIT_THREAD with a requested thread support of MPI_THREAD_SINGLE . However, this may be overridden with the LAM_MPI_THREAD_LEVEL environment variable. If set, this variable replaces the default MPI_THREAD_SINGLE value. The following values are allowed 0: Corresponds to MPI_THREAD_SINGLE 1: Corresponds to MPI_THREAD_FUNNELED 2: Corresponds to MPI_THREAD_SERIALIZED 3: Corresponds to MPI_THREAD_MULTIPLE See MPI_Init_thread(3) for more information on thread level support in LAM/MPI. LAM/MPI defines all required predefined attributes on MPI_COMM_WORLD . Some values are LAM-specific, and require explanation. MPI_UNIVERSE_SIZE This is an MPI-required attribute. It is set to an integer whose value indicates how many CPUs LAM was booted with. See bhost(5) and lamboot(1) for more details on how to specify multiple CPUs per node. Note that this may be larger than the number of CPUs in MPI_COMM_WORLD . LAM_UNIVERSE_NCPUS This is a LAM-specific attribute -- it will not be defined in other MPI implementations. It is actually just a synonym for MPI_UNIVERSE_SIZE -- it contains the number of CPUs in the current LAM universe. Note that this may be larger than the number of CPUs in MPI_COMM_WORLD . LAM_UNIVERSE_NNODES This is a LAM-specific attribute -- it will not be defined in other MPI implementations. It indicates the total number of nodes in the current LAM universe (which may be different from the total number of CPUs). Node that this may be larger than the number of nodes in MPI_COMM_WORLD . The LAM implementation of MPI uses, by default, SIGUSR2 . This may be changed when LAM is compiled, however, with the --with-signal command line switch to LAM's configure script. Consult your system administrator to see if they specified a different signal when LAM was installed. LAM/MPI does not catch any other signals in user code, by default. If a process terminates due to a signal, the mpirun will be notified of this and will print out an appropriate error message and kill the rest of the user MPI application. This behavior can be overridden (mainly for historical reasons) with the "-sigs" flag to mpirun . When "-sigs" is used, LAM/MPI will effectively transfer the signal-handling code from mpirun to the user program. Signal handlers will be installed during MPI_INIT (or MPI_INIT_THREAD ) for the purpose of printing error messages before invoking the next signal handler. That is, LAM "chains" its signal handler to be executed before the signal handler that was already set. Therefore, it is safe for users to set their own signal handlers. If they wish the LAM signal handlers to be executed as well, users should set their handlers before MPI_INIT* is invoked. LAM/MPI catches the following signals SIGSEGV , SIGBUS , SIGFPE , SIGILL All other signals are unused by LAM/MPI, and will be passed to their respective signal handlers. Portable MPI programs cannot assume the same process model that LAM uses (i.e., essentially the same as POSIX). MPI does not mandate anything before MPI_INIT (or MPI_INIT_THREAD ), nor anything after MPI_FINALIZE executes. Different MPI implementations make different assumptions; some fork auxillary threads and/or processes to "help" with the MPI run-time environment (this may interfere with the constructors and destructors of global C++ objects, particularly in the case where using atexit() or onexit(), for example). As such, if you are writing a portable MPI program, you cannot make the same assumptions that LAM/MPI does. In general, it is safest to call MPI_INIT (or MPI_INIT_THREAD ) as soon as possible after main begins, and call MPI_FINALIZE immediately before the program is supposed to end. Consult the documentation for each MPI implementation for their intialize and finalize behavior..
http://www.linuxmanpages.com/man3/MPI_Init.3.php
crawl-003
refinedweb
1,003
53.92
Context is the implicit owner of objects created when it's active and their properties. More... Context is the implicit owner of objects created when it's active and their properties. This is a top-level class that manages the lifetime and properties of all other API objects. Specifically, it has the following properties: CUcontextinstance - this should permit multi-GPU processing. #include <vpi/Types.h> Parallel for callback function pointer type. A serial (reference) implementation of this function might looks as follows: Parallel implementation should have equivalent behavior; that is, run all tasks between 0 and taskCount-1 and block the calling thread until all tasks have been completed. threadId parameter should be between 0 and maxThreads-1 ( maxThreadsset to zero can pass an arbitrary threadIdvalue to task functions. Definition at line 132 of file Types.h. #include <vpi/Context.h> Create a context instance. #include <vpi/Context.h> Create a context instance that wraps a CUDA context. CUDA operations will all under the wrapped context whenever the created context is active. #include <vpi/Context.h> Destroy a context instance as well as all resources it owns. The context ctx will be destroyed regardless of how many threads it is active in. It is the responsibility of the calling function to ensure that no API call issues using ctx while vpiContextDestroy is executing. If ctx is current to the calling thread, then ctx will also be popped from the current thread's context stack (as though vpiContextPop was called). If ctx is current to other threads, then ctx will remain current to those threads, and attempting to access ctx from those threads will result in the error VPI_ERROR_INVALID_CONTEXT. #include <vpi/Context.h> Gets the context for the calling thread. If there is no error and current context is NULL, the thread is using the global context. #include <vpi/Context.h> Get the current context flags. This function can be used to verify underlying backend support. #include <vpi/Context.h> Returns parameters set by vpiContextSetParallelFor. #include <vpi/Context.h> Pops a context from a per-thread context stack and saves it to the ctx variable. The top of the context stack is set as the current context for the calling thread. #include <vpi/Context.h> Pushes the context to a per-thread context stack and sets this context as the current context for the calling thread. #include <vpi/Context.h> Sets the context for the calling thread. If there are items on the context stack this will replace the top of the stack. If ctx is NULL this will clear the context stack and make the thread use the global context. #include <vpi/Context.h> Controls low-level task parallelism of CPU devices owned by the context. This function allows the user to overload the parallel_for implementation used by CPU devices. Changing this parallel_for implementation on a context that is performing CPU processing is undefined and might lead to application crashes.
https://docs.nvidia.com/vpi/group__VPI__Context.html
CC-MAIN-2021-21
refinedweb
490
56.05
DIFFTIME(3P) POSIX Programmer's Manual DIFFTIME(3P) This manual page is part of the POSIX Programmer's Manual. The Linux implementation of this interface may differ (consult the corresponding Linux manual page for details of Linux behavior), or the interface may not be implemented on Linux. difftime — compute the difference between two calendar time values #include <time.h> double difftime(time_t time1, time_t time0); The functionality described on this reference page is aligned with the ISO C standard. Any conflict between the requirements described here and the ISO C standard is unintentional. This volume of POSIX.1‐2008 defers to the ISO C standard. The difftime() function shall compute the difference between two calendar times (as returned by time()): time1− time0. The difftime() function shall return the difference expressed in seconds as a type double. No errors are defined. The following sections are informative. None. None. None. None. asctime(3p), clock(3p), ctime DIFFTIME(3P) Pages that refer to this page: time.h(0p), asctime(3p), clock(3p), ctime(3p), gmtime(3p), localtime(3p), mktime(3p), strftime(3p), time(3p)
http://man7.org/linux/man-pages/man3/difftime.3p.html
CC-MAIN-2017-43
refinedweb
181
50.53
>>. 35 Reader Comments (PS. I'm loving the running gag in the browser tabs every time Ars takes a browser screenshot) There are still things that need to be improved. It's just a question of time before we have hundreds of applications with megabytes of code and gigabytes of data. Caching will need to get way more sofisticated. If JavaScript is to become the intermediate format for any client side code, it will need to grow even speedier and start supporting namespaces and a few more features. It will also need a bigger standard library. I'm actually starting to think we may end up defeating all the retarded plugin abominations like Silverlight, Java and Flash. Sure. People run Direct3D on OpenGL, so the reverse should be quite possible. They will either a) ignore this until IE 12 and do b) or c) b) introduce an incompatible approach c) make it use the same external interface( via javascript ), but use DirectX underneath. Let's hope for at least the latter. In all fairness, Microsoft already provides hardware accelerated 3d graphics in the web browser through Silverlight. There is even Moonlight, an open source implementation. And CIL is possibly a better intermediary language than JavaScript for performance critical applications. That said, Silverlight is, and always will be, full of fail. It locks the cool stuff in a fucking box instead of integrating it into the DOM proper. Just like Java and Flash. Well you can write Javascript on CIL if you want. Right? I assume that Microsoft has some sort of J# thingy. But each browser has their own VM and JIT engine now and that is what you run your software on, of course. But, ya.. the nice thing about webgl and video tags and so on and so forth is that you can integrate it as part of the webpage. The video and 3D stuff is no longer restricted to a plugin that runs on top of the webpage, its part of the html code and can be manipulated and controlled by CSS, html, and javascript just like any other web element. It does not take any special technology or effort to stick a 3D element or video in a web page then it does to stick a jpeg or drop-down combo box in a web page in a html5-land. Damnit kids, get off my lawn! I really wish they'd talk to each other to decide on an intermediary standard while we all wait for the W3C to get off their asses and announce a standard instead tooling around until 2012. It's irritating. I'm reading and keeping up with all the new stuff - but I'm really put off on developing much of anything as there is no cross-browser way of doing things. I was excited at the start of the year when I was able to drop any direct support for IE6 - made coding issues a lot less of a hassle. Now - with all this new CSS3 and HTML 5 stuff being shown off we're right the hell back to having to code for crap several times. What the hell is the point of cross-browser standards - when they only ever mesh up for about 4 months every 10 years? We have also embedded OpenGL into Mozilla Engine (as an XULRunner Application). However, we use an Object Oriented, Drag&Drop, Visual Programming interface not just JavaScript and in a cloud computing environment. It supports particle system, collision detection, skeletal animation, and Cg (C for graphics) that allows you to program video pipeline in real time. Here is a sneak peek: Yes, the editor itself is written completely in HTML and JavaScript. Imaging a standard that does all this cross browsers... it will takes at least another 5-10 years. Let's be fair: these aren't even beta releases. They are in the cutting edge development builds that are nowhere near close to being beta, much less full releases. That said, the WebGL WG is targeting a release of the official spec in the first half of 2010, just as the article says. Both Mozilla and the WebKit team are involved in the WebGL WG, so I'm certain that both browsers will have a compatible implementation by the time the standard is ratified. HTML5 is in the same boat—it is still in draft form, and isn't ready to be made into a complete mainstream standard for some time. A lot of this stuff is still at the cutting edge, so a little patience for it to all come together is probably warranted. In the meantime, though, I'm glad that developers, browser vendors, and interested parties all get a chance to take a look at this stuff while it is in process—if not to marvel at what the future holds, then at least to identify issues before they are codified into a standard that may not be changed for several years. I think it has been explicit since Windows Vista. I believe but don't take it as gospel that aero is turned off in Vista if OpenGL is used until that program quits. I know a lot was said about it when Vista was in late beta and around the release. I don't use vista so I am a bit vague about how exactly it was ended. 3D objects should be HTML entities on the page, a part of the DOM, not corralled into a mini-window, with 3D data compressed, semantically, into a <canvas/>. While the aesthetic results are fine, this is a very bad road to go down. Webkit's CSS-based 3D is really the way to go. This is awful, awful, awful. Browsers have no real hope of implementing even the slightest 3D effects in real time that dedicated engines already do. Hell, browsers take seconds to load simple text and images so lets forget Javascript ever being up to the task of executing shaders and such for 3D objects. In my mind this is a great example of when something shouldn't be natively integrated into the browser. Sure it might work for all these trivial demos that are out there and CSS/Javascript might look great when viewed in the light of these demos but as soon as you want to do real 3D work, the browser container as an engine dramatically breaks down. Further more, the browser as an engine completely fails in translating the input model from 2D browsing to 3D space traversal. Please let's leave 3D in a plugin where it belongs lest we unnecessarily confuse our users in the false name of cleaner code. Cleaner code that only materializes while we're working on trival model viewer 3D examples. I like your deep sense of sarcasm. This is a never-ending argument in computer graphics. Some people like retained-mode, others like immediate-mode. We could have the same argument about SVG vs Canvas 2D. Webkit's CSS 3D Transforms spec is great for adding cool effects to web pages and I think everyone should implement it in addition to WebGL, but it will never have the performance or flexibility for games or other demanding graphics applications. The fact is that immediate-mode always crushes retained-mode for high performance rendering. If you want a Javascript version of Google Earth, Maya, Photoshop, or Crysis, CSS 3D Transforms just aren't enough. Perhaps your position is that you don't want Google Earth or Crysis in your browser, and I can sorta see where you're coming from, but I disagree. The Web is going to continue to evolve until it can support these kinds of applications; it's both desirable and inevitable. I wonder if there is a reason why VRML (or its successor x3d) is not being used .. now that network bandwidth/speed is no longer as bad as it used to be VRML first appeared. Does any one know if WebGL has any advantage over VRML/x3d? Wild Pockets is based on OpenGL and has been released for both Firefox and IE using a proprietary plug-in. This comment was edited by Em742 on September 22, 2009 22:04 WebGL is lower level than VRML. You can use WebGL to implement VRML if you want. VRML is a file format and scene graph (i.e. retained mode rendering API), whereas WebGL is an immediate mode rendering API (OpenGL ES 2.0) with which you can use any file format and/or scene graph you like. WebGL can do animations, lighting, and fancier rendering too; it's just that the Javascript code to do all that hasn't been written yet. This is just the tip of the iceberg; after all WebGL is only what, a month old? In a few months you will be seeing some very impressive demos. No, the real question is why is Ryan Paul reading about it? Actually on the video front I think that's completely cool, write your web page once then drop in various video formats and let the browser decide. The web page then should look the same on everything from an iphone to a netbook to an HD media center PC on broadband with the only difference being the quality of the video presented based on the devices capabilities. As to the webGL stuff I'm fairly confident that the two camps will have a unified approach before it becomes a full standard, heck everyone was able to compromise on xmlhttprequest and that started out life as an ActiveX control. But if that happened, the content you're looking at would be viewable with either OpenGL or Direct3D, right? For a user, you wouldn't need to have OpenGL installed to view the model if you only had Direct3D? And the web designer would only have to code the same content one way, not an OpenGL way and a Direct3D way? Why are people NOT getting this. Proprietary and Plug-ins are not good for the web in general. This is not a solution. The fact that I had to allow this plug-in permissions to install, etc, etc... Is the very reason why this is happening. Exactly. At least the Javascript will be theoretically readable, whereas Flash is closed by default. Here is some links on this issue: "Theoretically readable" is meaningless. If we ever want to move beyond Google-style text search, towards having truly useful knowledge-based applications on the Web, then we can't settle for theoretically readable. Indeed, if more content gets moved into vacuous <canvas> tags and hidden away in Javascript, then even Google will struggle. After all, we know from theory that you cannot always determine if an arbitrary piece of JS will even terminate, let alone the much harder task of trying to determine its purpose and informational content. Sure, the current situation with Flash is not great, but replacing it with essentially the same model in a glass box is no solution. Indeed, the only redeeming feature of Flash at present is precisely the fact that it requires a plugin, which somewhat limits the amount that it is used. It seems to me that WebGL and related technologies threaten to make the situation worse rather than better, and that a real opportunity for improvement is being discarded without any real critical discussion. You must login or create an account to comment.
http://arstechnica.com/information-technology/2009/09/webgl-in-firefox-nightly-builds-demoed-with-3d-spore-model/?comments=1
CC-MAIN-2016-26
refinedweb
1,918
69.11
Basic Image Manipulation in Java Basic Image Manipulation in Java This tutorial illustrates basic image manipulation in Java. We construct a simple Java class that will load a jpeg image from a web site and display it in a frame on the local computer. The code is written as a Java Desktop Application and should scale to other java architecture as well. Our project consists of two classes: - Main - ImageLoaderDemo We provide a simple Main class with a static main() method that serves as the entry point for the application. The main( ) method instantiates an instance of the ImageLoaderDemo class and passes the web address of the image to be processed. The Main class can be removed completely without any impact on the functionality of the ImageLoaderDemo class. The ImageLoaderDemo class inherits from the Java Frame class. In the ImageLoaderDemoconstructor, methods from the Frame are called to create a display a frame. The image is loaded using the awt toolbox with a single call to getImage(): mImage = java.awt.Toolkit.getDefaultToolkit().getImage(url); This method accepts an object of type URL as the only argument. A URL object is easy to create. We simply pass a web address to the URL constructor. We do that in the main( ) method, line 19: new ImageLoaderDemo(new URL(url)); The preceding line calls the constructor of the URL class, which returns a reference to the newly created URL object. That URL object reference is immediately passed on to the ImageLoaderDemo constructor. Java garbage collection prevents objects from being deallocated until they are no longer accessible. Line 120 in the ImageLoaderDemo class finally drops the image onto the frame: g.drawImage(mImage, insets.left, insets.top, this); This line requires a graphics context reference. The graphics context is magically (just kidding) provided by the super class, Frame, and made available in the Paint( ) method in our subclass. We are overriding the default Paint( ) method that our base class would otherwise invoke. Note that nowhere in our code is our Paint( ) method explicitly called. The ImageLoaderDemo class - there's a left/right scroller at the bottom of the capsule. /* * In this project we introduce the awt toolkit: * * * 1. We create a frame. * 2. We create and load an image in the frame. * 3. Eventually the image gets displayed in the frame by a higher power. * * * Adapted by: nicomp: * * This gets a bit confusing but the bottom line is that you need a graphics context object * in order to draw stuff. You only get that from the JVM. See the paint() method below. */ import java.awt.*; import java.awt.image.*; import java.net.URL; /** * Image Loader Demo class * <br>We are inheriting the properties and methods of the frame class. * <br>Since the frame class is our super class, we will have access to a graphics context object. * @author: nicomp */ public class ImageLoaderDemo extends Frame { // Here is where we will store our image. private Image mImage; // We would like to do this with a BufferedImage object, but the awt toolkit can't handle buffered image objects // static private BufferedImage mImage // We will keep track of how many times the image is repainted. int imageUpdateCallCount; /** * Constructor * <br>url = the Universal Resource Locator (web address) of the image to load. * <br>It would be nice to add a listener. Using the Task Manager to close the window is a drag. */ public ImageLoaderDemo(URL url) { super("Image Loader Demo"); // Call the base class constructor (a Frame object) imageUpdateCallCount = 0; // Built-in methods to manipulate images, and do other stuff. // It's called the "Toolkit" and it's part of the awt package. // getImage() can also handle file names on the local network. mImage = java.awt.Toolkit.getDefaultToolkit().getImage(url); rightSize(); } /** * Adjust the image so it will display properly. */ private void rightSize() { int width = mImage.getWidth(this); int height = mImage.getHeight(this); if (width == -1 || height == -1) { return; } // Look up the border size for this image, if any. Insets insets = getInsets(); // Resize the container to handle the borders. setSize(width + insets.left + insets.right, height + insets.top + insets.bottom); // Make the frame visible. setVisible(true); } /** * A call-back method that is referenced by the base class (Frame) * to configure the image. * Note that this method is not called explicitly in this project. * The point is that we would never know this was necessary unless we * studied a working example. * @param img * @param infoflags * @param x * @param y * @param width * @param height * @return */ public boolean imageUpdate(Image img, int infoflags, int x, int y, int width, int height) { // This gets a little odd: we will see that this method is called quite a few times. System.out.println("imageUpdate()" + imageUpdateCallCount++); if ((infoflags & ImageObserver.ERROR) != 0) { System.out.println("Error loading image!"); System.exit(-1); } if ((infoflags & ImageObserver.WIDTH) != 0 && (infoflags & ImageObserver.HEIGHT) != 0) { rightSize(); } if ((infoflags & ImageObserver.SOMEBITS) != 0) { repaint(); // This triggers a call to paint() (see below) } if ((infoflags & ImageObserver.ALLBITS) != 0) { rightSize(); repaint(); // This triggers a call to paint() (see below) return false; } return true; } /** * Ditto. See comments for imageUpdate method * @param g */ // public void update(Graphics g) { // paint(g); // } /** * Called by update() which is called by ... who knows. * Whoever calls this will send us a Graphics Context object to draw onto. * See comments for imageUpdate method. * @param g */ public void paint(Graphics g) { Insets insets = getInsets(); g.drawImage(mImage, insets.left, insets.top, this); } } The Main class that drives the ImageLoaderDemo class /* * Entry point for test main. * */ import java.net.URL; // Convert a string object to a URL object /** * Entry Point for ImageLoaderDemo class. * @author nicomp: */ public class Main { // Entry point for the project. public static void main(String[] args) throws Exception { String url = ""; if (args.length > 0) url = args[0]; // Instantiate an object of type ImageLoaderDemo. // We don't need a handle on it; we just want the constructor of the ImageLoaderDemo class to be called. new ImageLoaderDemo(new URL(url)); // The URL constructor is called too. // The frame persists when the main ends. } } I feel stupider just reading your post. Anyway, I'm sure some people will love this. - Carol thanks Nicomp, intriguing reading. I know for a fact a site built using Java so grateful for the lesson. I'm not sure how far Java as a language has caught on or it's special features or even whether it is a preferred language but sounds like image handling is fairly advanced. hopeless men,i really enjoyed going through your program.although i'm still a baby programmer, i'm still learning and i really want us to be friends so that i can learn better. this's my mail aghasacheal@yahoo.com thanks 7
https://hubpages.com/technology/Basic-Image-Manipulation-in-Java
CC-MAIN-2017-17
refinedweb
1,110
66.44
Post your Comment Hibernate min() Function Hibernate min() Function This section contains an example of the HQL min() function. min() function in HQL returns the smallest value from the selected... keywords as the SQL to be written with the min() function. for example : min JPA Min Function JPA Min Function In this section, you will learn about the min function of JPA... result. JPA Min Function: Query query=em.createQuery( " Hibernate Min() Function (Aggregate Functions) Hibernate Min() Function (Aggregate Functions)  ... the Min() function. Hibernate supports multiple aggregate functions. When... criteria. Following is a aggregate function (min() function how to calculate max and min - RUP function max and min in java. public class MathMaxMin { public static...)); // Method for the min number among the pair of number System.out.println("Method... Thanks Hi friend, In Math class having two function max Min Value Min Value How to access min value of column from database table... min(id) from QuestionModel result_info"; Query query = session.createQuery(selectQuery); System.out.println("Min Value : " +query.list().get(0 MySQL Aggregate Function | +----------+ The 'MIN' function is used to find... MySQL Aggregate Function This example illustrates how use the aggregate function min project with vb - WebSevices min project with vb i have do the min project with VB Frantend and oracle backend JavaScript min method JavaScript min method  ..., secondValue, thirdValue, ..........nValue); This min() method can be used...; it calls the function findMinimum() as we have defined in the JavaScript of our code Using Min, Max and Count on HQL. Using Min, Max and Count on HQL. How to use Min, Max and Count on HQL Hibernate criteria query using Min(). Hibernate criteria query using Min(). How to find minimum value in Hibernate criteria query using Projection? You can find out minimum...); } } } Output: Hibernate: select min(this_.salary) as y0 function function difference between function overloading and operator overloading Mysql Min Max Mysql Min Max Mysql Min Max is useful to find the minimum and maximum records from a table. Understand with Example The Tutorial illustrate an example from Mysql Min Max Java bigdecimal min example Java bigdecimal min example Example below demonstrates the working of bigdecimal class min() method. Java min method analysis the bigdecimal objects value and returns SQL Aggregate Functions List ; SQL Aggregate Functions List describe you the Aggregate Function List Queries. The Aggregate Function include the average, count, min, max, sum etc...; count (id) Function is a aggregate function that return the sum number of records Hibernate Avg() Function (Aggregate Functions) Hibernate Avg() Function (Aggregate Functions)  ... to use the avg() function. Hibernate supports multiple aggregate functions. When...(...), sum(...), min(...), max(...) , count(*), count(...), count(distinct Hibernate Max() Function (Aggregate Functions) Hibernate Max() Function (Aggregate Functions... to use the Max() function. Hibernate supports multiple aggregate functions. When...(...), sum(...), min(...), max(...) , count(*), count(...), count(distinct hibernate criteria Max Min Average Result Example hibernate criteria Max Min Average Result Example In this Example, We... the Projections class max, min ,avg methods. Here is the simple Example code files..."); } if (i == 1) { System.out.print("Min Salary Post your Comment
http://roseindia.net/discussion/17861-JPA-Min-Function.html
CC-MAIN-2015-40
refinedweb
510
50.33
Using libsvm Experiment 1 We'll use NumPy and matplotlib for plotting, so import both: import numpy as np import matplotlib.pyplot as plt The data we want to classify comes from two circles, so first define a function to generate some points on a circle (plus some noise): def circle(radius, sigma=0, num_points=50): t = np.linspace(0, 2*np.pi, num_points) d = np.zeros((num_points,2), dtype=np.float) d[:,0] = radius*np.cos(t) + np.random.randn(t.size)*sigma d[:,1] = radius*np.sin(t) + np.random.randn(t.size)*sigma return d We want to generate 100 training points for each class and 30 points for testing the model, then generate and plot the data: num_train = 100 num_test = 30 sigma = 0.2 d1 = circle(3, sigma, num_train) d2 = circle(5, sigma, num_train) plt.figure() plt.plot(d1[:,0],d1[:,1],'ro') plt.plot(d2[:,0],d2[:,1],'bo') plt.show() So we get back clearly seperated data: Now to the SVM. The LibSVM binding expects a list with the classes and a list with the training data: from svmutil import * # training data c_train = [] c_train.extend([0]*num_train) c_train.extend([1]*num_train) d_train = np.vstack((circle(3,sigma,num_train), circle(5,sigma,num_train))).tolist() # test data c_test = [] c_test.extend([0]*num_test) c_test.extend([1]*num_test) d_test = np.vstack((circle(3,sigma,num_test), circle(5,sigma,num_test))).tolist() problem = svm_problem(c_train,d_train) The parameters for the model have to be defined (supress output in training): param = svm_parameter("-q") # quiet! param.kernel_type=RBF Our model can now be trained as: m = svm_train(problem, param) To get predictions from the model simply pass our test data and the model to svm_predict: pred_lbl, pred_acc, pred_val = svm_predict(c_test,d_test,m) and you'll see, the SVM perfectly classifies the data: >>> p_lbl, p_acc, p_val = svm_predict(c_test,d_test,m) Accuracy = 100% (60/60) (classification) But what if the the training data is not that perfect, set sigma to 1: then the default parameters yield: >>> pred_lbl, pred_acc, pred_val = svm_predict(c_test,d_test,m) Accuracy = 76.6667% (46/60) (classification) Often a Grid Search is applied in order to find the best parameters. A Grid Search is nothing but a bruteforce attempt to check all possible parameter combinations. We can easily do that: results = [] for c in range(-5,10): for g in range(-8,4): param.C, param.gamma = 2**c, 2**g m = svm_train(problem,param) p_lbl, p_acc, p_val = svm_predict(c_test,d_test,m) results.append([param.C, param.gamma, p_acc[0]]) bestIdx = np.argmax(np.array(results)[:,2]) So the best combination has the parameters: >>> results[bestIdx] [0.125, 0.5, 83.333333333333329] and is with 83.3% only slightly better than default. That's it! The next experiment will use some real life data for prediction. Experiment 2 The Wine Dataset is a rather simple dataset. It comes as a CSV file and is made up of 178 lines with a class label and 13 properties. In this experiment I want to show you how to use libsvm with Python. Python comes with a built-in csv reader, so it's easy to read in the data: import csv reader = csv.reader(open('wine.data', 'rb'), delimiter=',') classes = [] data = [] for row in reader: classes.append(int(row[0])) data.append([float(num) for num in row[1:]]) Please note: Always make sure not to include any class labels in your feature vector, because this would make prediction for a SVM trivial and useless. Now that we have read in the data we'll perform a 10-fold cross validation on the raw data. This will show you that the default RBF and the sigmoid kernel don't work on raw data; both perform only slightly better than random guessing. The linear and polynomial kernel function are better suited for raw data with both having around 95% cross-validation accuracy. I've played with the dataset before and we saw that normalization sometimes matters. Here is how to do a min-max or z-score normalization with NumPy: def normalize(X, low=0, high=1, minX=None, maxX=None): X = np.asanyarray(X) if minX is None: minX = np.min(X) if maxX is None: maxX = np.max(X) # Normalize to [0...1]. X = X - minX X = X / (maxX - minX) # Scale to [low...high]. X = X * (high-low) X = X + low return X def zscore(X): X = np.asanyarray(X) return (X-X.mean())/X.std() On normalized data the RBF performs better with 98% accuracy and the sigmoid kernel has 97%. To finally learn a model you have to switch the cross validation off and train the model on your data. With svm_predict you would generate predictions for your input data. You probably ask yourself now how to normalize unseen input data. If you know the range your inputs can take it is trivial, just do a min-max normalization with the given min and max. If your data is drawn from a stationary process you can assume, that mean and variance doesn't change over time. So you could do a z-score normalization on the new data with the mean and standard deviation from your training data. Note that in the wine example all features were measured on a different scale, so you have to normalize each feature with its mean and standard deviation separately. Now let's put all this into a script: from svmutil import * import numpy as np import random import csv def normalize(X, low=0, high=1): X = np.asanyarray(X) minX = np.min(X) maxX = np.max(X) # Normalize to [0...1]. X = X - minX X = X / (maxX - minX) # Scale to [low...high]. X = X * (high-low) X = X + low return X def zscore(X): X = np.asanyarray(X) mean = X.mean() std = X.std() X = (X-mean)/std return X, mean, std reader = csv.reader(open('wine.data', 'rb'), delimiter=',') classes = [] data = [] for row in reader: classes.append(int(row[0])) data.append([float(num) for num in row[1:]]) data = np.asarray(data) classes = np.asarray(classes) # normalize data means = np.zeros((1,data.shape[1])) stds = np.zeros((1,data.shape[1])) for i in xrange(data.shape[1]): data[:,i],means[:,i],stds[:,i] = zscore(data[:,i]) # shuffle data idx = np.argsort([random.random() for i in xrange(len(classes))]) classes = classes[idx] data = data[idx,:] # turn into python lists again classes = classes.tolist() data = data.tolist() # formulate as libsvm problem problem = svm_problem(classes, data) param=svm_parameter("-q") # 10-fold cross validation param.cross_validation=1 param.nr_fold=10 # kernel_type : set type of kernel function (default 2) # 0 -- linear: u'*v # 1 -- polynomial: (gamma*u'*v + coef0)^degree # 2 -- radial basis function: exp(-gamma*|u-v|^2) # 3 -- sigmoid: tanh(gamma*u'*v + coef0) #param.kernel_type=LINEAR # 95% (raw), 96% (zscore) #param.kernel_type=POLY # 96% (raw), 97% (zscore) param.kernel_type=RBF # 43% (raw), 98% (zscore) #param.kernel_type=SIGMOID # 39% (raw), 98% (zscore) # perform validation accuracy = svm_train(problem,param) print accuracy # disable cv param.cross_validation = 0 # training with 80% data trainIdx = int(0.8*len(classes)) problem = svm_problem(classes[0:trainIdx], data[0:trainIdx]) # build svm_model model = svm_train(problem,param) # test with 20% data # if data was not normalized you would do: # data = (data-means)/stds p_lbl, p_acc, p_prob = svm_predict(classes[trainIdx:], data[trainIdx:], model) print p_acc # perform simple grid search #results = [] #for c in range(-3,3): # for g in range(-3,3): # param.C, param.gamma = 2**c, 2**g # acc = svm_train(problem,param) # results.append([param.C, param.gamma, acc]) #bestIdx = np.argmax(np.array(results)[:,2]) #print results[bestIdx]
https://www.bytefish.de/blog/using_libsvm.html
CC-MAIN-2020-40
refinedweb
1,271
58.69
Super quick intro to monads February 28, 2014 Monads tend to come up a lot in functional programming. All you need to know about functional programming for now is that it’s a paradigm in which mutable data is generally discouraged, or even forbidden. Computation generally happens by composing pure functions together. A pure function is one that does nothing but return the result of a computation. It doesn’t print anything, store anything, ask for user input, etc. It just manipulates the data you pass it and returns more data. You may be more familiar with object-oriented programming. Let me translate monads into OOP-speak. Monadis an interface, or an abstract class if you like. It has a constructor and a bindmethod: class Monad: def __init__(self, x): raise NotImplementedError def bind(self, f): raise NotImplementedError xcan be anything. fis a function that takes something of the same type as x(presumably from the monad) and returns a new monad. Lastly, bindalso returns a new monad (often from the result of f). Literally that’s it. Maybe you don’t know what it’s used for, but at least now you’ve seen the interface. Typically, a monad acts as some sort of container, and bindapplies a function to what it contains. Let’s make a monad that stores a single element: class SimpleMonad(Monad): def __init__(self, x): self.x = x def bind(self, f): return f(self.x) To make this a little more concrete, this is how you use it: def double(x): return SimpleMonad(x * 2) def add1(x): return SimpleMonad(x + 1) foo = SimpleMonad(5).bind(double).bind(add1) Okay, so you can chain functions together with bind. But why would we do that? Equivalently, we could have just written add1(double(5)), which is much shorter. The main idea is that monads don’t have to just call the functions you pass to bind. They can change the rules. For example, here is a monad that allows you to chain operations without worrying about null pointer errors. class MaybeMonad(Monad): def __init__(self, x): self.x = x def bind(self, f): if self.x is None: return self else: return f(self.x) If any intermediate computation in a sequence of operations results in None, the entire expression becomes Noneand no further operations are performed (much like how NaNworks in floating-point arithmetic, if you’re familiar with that). If you spend more time with monads, you’ll find that you can also use them to implement exception handling, mutable state, concurrency, and a bunch of other things. Remember: functional programming is all about writing programs by composing functions together. The power of monads is that they allow you to change the rules for how functions are composed. When I was learning about monads, the main things I was confused about were: - It seems like using monads requires you to write a lot of code, e.g., SimpleMonad(5).bind(double).bind(add1)instead of add1(double(5)). Do people really go through all this trouble? - How do I get the value out of the monad? Can I access it like monad.x? - I hear that Haskell doesn’t let you write “impure” functions that have side effects, but I know for a fact that Haskell programs can print to the screen and write to files. What’s the deal? Do monads somehow resolve this? Well, I know the answers now, so let me share the love: - Haskell has a special syntax for monads and Python doesn’t—so that’s why they don’t look very nice in Python. It’s actually quite brilliant. Check it out if you have time. Using it feels very natural and it looks a lot like imperative code. - Some monads provide a “getter” function that allows you to get the value back out. But a monad is not required to have one. For some monads, it doesn’t make sense to take a value out of it. A monad is not required to act like a container. See #3. - Yep—all functions in Haskell are pure. Haskell has a neat trick for doing I/O. You can think of a Haskell program as a pure computation that returns a tree-like data structure, and this tree represents another program that does impure things.Now, building this tree can be done in a purely functional way—as is required in Haskell. But when you “run” a Haskell program, two things happen: 1) your pure Haskell computation is executed, resulting in this tree-like data structure, 2) the tree-program from step 1 (which can print to the screen and write to files) is also executed.It just so happens that a certain monad, especially when combined with Haskell’s special monad syntax, makes it easy to build this tree. This monad is called IOin Haskell. The bind operation does not call the function you pass it, as it did in our monad examples above. Rather, it just stores it in the tree. The runtime will call it when interpreting the tree. That’s the high-level idea. In this post, I implement this idea in Python, which will probably make it a little more clear for you.Lastly, the IOmonad does not have a “getter” function. For example, if you have an IOaction that asks a user for a string of input, you cannot extract the string from the monad. This is because the IOaction doesn’t actually have a string to give you—it only represents a computation that returns a string. When you binda function fto this monad, you’re building a new IOaction that asks the user for a string and then calls fon it. To summarize, monads let you change what it means to compose functions together. You might want to keep track of some extra information, store the functions in a tree, call them in a different order, call them multiple times, skip some functions entirely (based on the value being passed to them), etc. You don’t have to use them, but now you don’t have to fear them either! EDIT: Fixed a typo and added the MaybeMonadexample at /u/wargotad’s suggestion.
https://www.stephanboyer.com/post/83/super-quick-intro-to-monads
CC-MAIN-2017-30
refinedweb
1,044
65.32
- Part 1: Introduction - Part 2: Effects Let's imagine you want to write a function which writes to a file in Node. Here's a simple example: import * as fs from "fs"; function writeHelloWorld(path: string) { fs.writeFileSync(path, "Hello world!"); } writeHelloWorld("my-file.md"); What's the type of our function writeHelloWorld? fs.writeFileSync returns void, so naturally so does writeHelloWorld. But let's think about this for a second. What if we took the same function and made it empty? Like this: function writeHelloWorld(path: string) { // Nothing to see here! } It's got the exact same type (path: string) => void. But of course they do very different things. What the first example does - writing a file - is called an effect. It's certainly not a pure function. If you're familiar with React hooks, this is exactly the same concept as useEffect, which you use for things like fetching data (which is very much an effect). We're going to run on the assumption here that writing pure functions is good, and having side effects is bad. So how could we make writeHelloWorld pure? Is this possible? How about we make it lazy by having writeHelloWorld return a function? import * as fs from "fs"; function writeHelloWorld(path: string) { return () => fs.writeFileSync(path, "Hello world!"); } const runEffects = writeHelloWorld("my-file.md"); Is this code now pure? - no side effects ✅ - same input always produces same output ✅ Seems pure to me! But it doesn't do anything. We also need to add: runEffects(); Only this part of our code is impure, because without that, nothing would happen. Is this cheating? Sort of, but it works. In fact we can make our entire program pure, despite it being full of effects like file writes and random numbers all over the place. So long as every function with effects is lazy, we can combine all of our effects together into one lazy function - like a big red button to run the program: // WARNING: do not call this function unless you're sure what you're doing! function runMyPureProgram() { return () => { writeSomeFiles(); getSomeRandomNumbers(); getTodaysDate(); } } Calling runMyPureProgram() is impure, yet the program itself is pure. But how do we ensure our functions are pure? If I look at getSomeRandomNumbers and it returns () => number[], how do I know if it has an effect or not? If only there was some way the compiler could tell us this... 🤔 Well it's called Type Script for a reason! Let's imagine a new type: type Effect<T> = () => T; What we say is, any impure function with effects must return the type Effect. T is the return value - for example, if our effect was to read a file as a string it could be Effect<string>. We can write: function writeHelloWorld(path: string): Effect<void> { return () => fs.writeFileSync(path, "Hello world!"); } Now, just by looking at the type signature of writeHelloWorld, we know it's going to be doing some effectful things. In this case, T is void since it doesn't return a value. So if I start adding new impure things in my code, TypeScript will complain until I actually handle the effects correctly at the end of the program. Through some lazy functions and a new type called Effect, we've both kept our program pure and we know where the impure effectful things are hiding. 🤯 If you're like me, at this point you might be thinking: "This is great! Wait - what's the point?". It seems like we still have a bunch of impure stuff littered around our codebase, and we've now made the code more awkward than it was before. All in the name of "purity". Let's stop for a second for a quiz. Imagine this contrived example: function funcA(): number { return funcB(); } function funcB(): number { return funcC(); } function funcC(): number { return 5; } We want to change funcC to return a random number. I'll give you three ways we could do this, which one is the 'best'? 1. The impure way: function funcC(): number { return Math.random(); } Easy! Nothing else to change, let's go home now. 2. Dependency injection: function funcA(randomGenerator: () => number): number { return funcB(randomGenerator); } function funcB(randomGenerator: () => number): number { return funcC(randomGenerator); } function funcC(randomGenerator: () => number): number { return randomGenerator(); } Our functions are still nice and pure, but we had to add an argument to them all, which is a lot of typing... 3. Effects function funcA(): Effect<number> { return funcB(); } function funcB(): Effect<number> { return funcC(); } function funcC(): Effect<number> { return () => Math.random(); } Everything's still pure but we had to change the return type of everything to make it lazy. So I'll ask again, which one's the 'best'? I would say typing effects are a blessing and a curse. It's exactly because they require you to change your function signatures you're forced to program in a way where effects are kept to a minimum in your code. And you're more likely to put them all in one place. It's like immutability - it can be more annoying to do, but can provide a cleaner codebase overall. Whether the short-term pain is worth the long-term gain is always difficult to say though. Even if you're now convinced to use the Effect type, this still isn't perfect. There's nothing in TypeScript which forces us to add an Effect type for effects, we just have to be vigilant about it. If you want to be forced to do it, that's when you head to pure a functional language like Haskell. Introducing: IO The fp-ts library already has a type just like Effect, and its name is IO. In fact, its type is exactly as we defined Effect. // taken from source code export interface IO<A> { (): A } The benefit of using IO is it comes with a lot of helper functions for handling effects. Plus, it provides functions for effectful things (like generating random numbers) which are already typed as IO. Here are some simple examples: Add a number Let's start here: import { IO } from "fp-ts/lib/IO"; import { random } from "fp-ts/lib/Random"; function funcA(): IO<number> { return funcB(); } function funcB(): IO<number> { // importing random from fp-ts is the same as // return () => Math.random(); return random; } Imagine we want to add 1 in funcA: function funcA(): IO<number> { return funcB() + 1; } Oops! We get the error: Operator '+' cannot be applied to types 'IO<number>' and 'number'. We could change funcA to this: function funcA(): IO<number> { return () => funcB()() + 1; } But that's annoying and plain ugly. Instead fp-ts gives us io.map: import { IO, io } from "fp-ts/lib/IO"; function funcA(): IO<number> { return io.map(funcB(), (x) => x + 1); } Much like you call .map on an array, you can map something of type IO too. Logging a random number We can use io.chain to combine two effects into a single effect: import { IO, io } from "fp-ts/lib/IO"; import { random } from "fp-ts/lib/Random"; import { log } from "fp-ts/lib/Console"; function logRandomNumber(): IO<void> { return io.chain(random, log); } const program = logRandomNumber(); // this logs a random number when called program(); In this way we can combine all of our effects into one program, which we then call right at the very end. There's much more to learn that we haven't covered (I haven't even mentioned error handling), so do check out these resources if you want to find out more: Discussion (2) Surely at times this would be unnecessary closure allocation? Interesting point! I guess you need to weigh up developer productivity vs performance. In the vast majority of cases I doubt this would add any noticeable difference to performance.
https://dev.to/edbentley/mind-bending-functional-programming-with-typescript-part-2-40c9
CC-MAIN-2021-10
refinedweb
1,297
64.81
I’m struggling with a particular script, I’ve tried googling the problem but I’ve become stuck… I was wondering if anyone could possibly help me out with the SQL formulas. I’ve posted this to Reddits r/python as well, and post any answer found there, here for those who may need it in the future! The current script is here: At the moment, the script reads a sample excel file, reads the value and formatting of all the cells on each sheet and records the values and formatting to an SQLlite3 file. Currently, the script reads all the values and formatting fine, creates a table within SQL for each sheet with no problems (Confirmed this with DB Browser for SQLite), but there are no rows of data within the tables. I think there is a problem with my ‘templateCell()’ function: def templateCell(s,sn,c,r): #__init__ will gather the cell format data cellRef = c + r val = s[cellRef].value fname = s[cellRef].font.name fsize = s[cellRef].font.size fbold = int(s[cellRef].font.bold == 'true') fital = int(s[cellRef].font.italic == 'true') ccolour = s[cellRef].fill.start_color.index print("Cell Reference is: {},".format(cellRef)) c.execute("INSERT INTO {tn} VALUES ({a},{b},{c},{d},{e},{f})".format(tn=sn,a=cellRef,b=val,c=fname,d=fbold,e=fital,f=ccolour)) Or more specifically the c.execute function on line 11, where it is supposed to create the row in the database for each cell. Variables are: s = sheet sn = sheetname (function of sheet) c = column r = row Any ideas? Edit 1: The Excel file is available here: testExcel.xlsx Question: … but there are no rows of data within the tables. You are probably missing a c.commit() after you have done one Sheet. Note: Check out the executemany(sql, seq_of_parameters) function.¶` class sqlite3.Connection commit().
http://w3cgeek.com/python-excel-and-sqllite3.html
CC-MAIN-2019-04
refinedweb
311
58.58
I have this little tool that I hacked together: /** * Write out the public API of all the Java classes of a Jar file. * <p> * To be called as <code>java -jar jardump.jar jarfile</code> when this * program is compiled and packaged (using GenJar) using the associated * Ant build.xml file. It's behavior can be controlled by 3 Java system * properties: * <ul> * <li>verbose: include all members and processing info (defaults to 'false')</li> * <li>quiet: do not output any error messages on System.err (defaults to 'false')</li> * <li>noexit: do not do a System.exit(code) on error or success (defaults to 'false')</li> * </ul> * * @author <a href="mailto:ddevienne@lgc.com">Dominique Devienne</a> * @version Jul 2002 */ public class JarApiWriter { ... } You redirect to a file, doing this for both JARs, and then simply use your favorite Text Diff tool. Requires BCEL and GenJar to compile and package respectively, after which you end up with a stand-alone jardump.jar utility. Contact me directly if you want either the build.xml and JarApiWriter, or directly the jardump.jar (JDK 1.4 compiled). The list would strip a ZIP attachment, and it doesn't deserve to be in BugZilla. --DD PS: I'm sure there must be some tool for this somewhere, or that modern Java IDE can do something like that. -----Original Message----- From: Pellier Marc [mailto:archi7@onssapl.fgov.be] Sent: Friday, April 11, 2003 1:28 AM To: 'Ant Users List' Subject: Test if interface has changed Somebody knows a task whitch compare if two module (jar) have the same interface. In fact I have two versions of the same module. But I don't know if the module had changed. There are only interfaces interface in. May be not a task but a tool. Thanks
http://mail-archives.apache.org/mod_mbox/ant-user/200304.mbox/%3CD44A54C298394F4E967EC8538B1E00F1D2BEA8@lgchexch002.lgc.com%3E
CC-MAIN-2015-22
refinedweb
301
66.33
By Tim Landgrave Applying the principle of the least (PTL) in your architectural plans requires that you ask yourself a simple question whenever you approach any element of the design. That question always starts with "What’s the least amount of ...." Although this principle can be applied throughout the design process, let’s look at three common areas that you should consider using it for. Least amount of code It’s a documented fact that the more lines of original code required to create a system, the more that system costs to create and to maintain. Moreover, as the lines of code in a system increase, so do the potential holes through which security can be breached. There are two keys to successfully minimizing the lines of code in a system while still getting all of the functionality the system requires. First, always look for existing features in the underlying operating system or virtual machine before coding a feature yourself. For example, developers writing systems that run on Windows 2000 without the .NET Framework have to create their own mechanisms for managing threads and thread pools. This lack of a clear methodology often leads to several incompatible and expensive (from a support perspective) implementations of threading models within a development organization. But with threading support built into the .NET Framework, developers can minimize the lines of new code they have to write by taking advantage of the System.Threading namespace. Any architect or developer using the .NET Framework as a development platform owes it to his or her company to become an expert on the capabilities of the Base Class Library (BCL) before designing or implementing a new system. Second, design your own code for reusability. I encourage development organizations to build the PTL into their developers’ mindset by thinking about designing everything for reuse. This requires a new way of thinking for most developers because they don’t think anyone can write better code than they can. But given proper guidance—and the chance to critique and test the code they’re asked to reuse—you can build a PTL attitude when it comes to code development. One of the best places to start is to build some base ASP.NET page classes that all your developers share. These classes can implement common functionality like link-tracking, identity context and skinning or theme support, and can greatly minimize the amount of new code required to generate a new ASP.NET application. Least amount of data necessary System architects can sometimes be lazy. Rather than do the work to decide which particular data elements a particular class requires to do its work, they just pass in the equivalent of a SQL “select *” and throw away what they don’t use. When large numbers of users are added to the system, these inefficiencies become evident because performance suffers. And security breaches become more acute when the amount of data compromised is increased unnecessarily. Applying PTL to your data architecture means that you’ll make two commitments. First, you’ll limit the amount of data passing between objects to the bare minimum required to complete the required operation. And second, you’ll limit the amount of data returned to users to the amount they should have access to at any given point in time and to only that data that they can actually use from a given view. Least amount of security required The other key PTL violation that I commonly see in development shops involves system security. To make their developers more productive, many organizations allow the developers to write code in an administrator environment rather than in a user environment. This is a bad practice for development shops because it leads to sloppiness in testing, deployment, and operations. At a minimum, developers should be required to test their code with a user’s security context before they can check any code into a source code repository or send it to the testing organization. As an architect, you should apply the PTL to your security practices by determining the security requirements for any users or groups that will have access to your application. Work with the operations team to create Active Directory groups that have only the specific privileges required for your application to execute. And architects should create any common user IDs and password required for package security with the same eye toward applying the PTL. Applying PTL across your application design can have dramatic financial consequences for your organization. An application with fewer lines of code will take less time to develop and debug and will be less expensive to maintain. Applications that use data more efficiently will scale better, requiring less equipment and software to support the same number of users. More secure applications means less exposure to viruses and hackers, thereby lowering the overall support cost for deployed applications. Making the PTL a core part of your .NET application architecture will help you reap these benefits for your company.
https://www.techrepublic.com/article/apply-the-architectural-principle-of-the-least-to-your-development-projects/
CC-MAIN-2017-51
refinedweb
832
51.28
adeeb alexander wrote:I have even done what Garrett has said, i used the split() method to split with comma. Just like this str.split(",") but the problem here is after i split i get two string the first is: ferrari car and the second is: apple fruit. Now when i use trim() method, because i need the spaces between ferrari car this doesnt suit my requirement, the trim() method remove all the spaces including which i need public String trim() Returns. Now the problem is that i get a different output if i dont give any space between word and comma. I am getting 1 error i.e 'cannot find symbol method split(java.lang.String)'. Please help me out adeeb alexander wrote:Thanks for your reply Garrett. Can you please tell me where i am going wrong now. I am getting 1 error i.e 'cannot find symbol method split(java.lang.String)'. Please help me out import java.io.*; public class StripSpaces { public static void main(String[] args) { String data = "ferrari car, buick, apple fruit ,banana"; String[] toks = data.split(","); for (int i = 0; i < toks.length; i++) { toks[i] = toks[i].trim(); } String result = mkString(toks, ','); System.out.println(result); } //Might as well extract this into a reusable method public static <T> String mkString(T[] arr, char separator) { //TODO: Implement this! String[] spl=null; for(int i=0;i<arr.length;i++) { System.out.println(arr[i]); spl=arr[i].split("\\s+"); for(int j=0;j<spl.length;j++) { System.out.println(spl[i]); } } return null; } } > Value in B: porsche car Value in C: porsche Value in C: car Value in B: ferrari car Value in C: Value in C: ferrari Value in C: car Value in B: yamaha bike Value in C: Value in C: yamaha Value in C: bike Press any key to continue... more from paul wheaton's glorious empire of web junk: cast iron skillet diatomaceous earth rocket mass heater sepp holzer raised garden beds raising chickens lawn care CFL flea control missoula heat permaculture
http://www.coderanch.com/t/451947/java/java/remove-spaces
crawl-003
refinedweb
344
74.19
PyDocX 0.3.8 docx (OOXML) to html converter pydocx ====== .. image:: :align: left :target: pydocx is a parser that breaks down the elements of a docxfile and converts them into different markup languages. Right now, HTML is supported. Markdown and LaTex will be available soon. You can extend any of the available parsers to customize it to your needs. You can also create your own class that inherits DocxParser to create your own methods for a markup language not yet supported. Currently Supported ################### * tables * nested tables * rowspans * colspans * lists in tables * lists * list styles * nested lists * list of tables * list of pragraphs * justification * images * styles * bold * italics * underline * hyperlinks * headings Usage ##### DocxParser includes abstracts methods that each parser overwrites to satsify its own needs. The abstract methods are as follows: :: class DocxParser: @property def parsed(self): return self._parsed @property def escape(self, text): return text @abstractmethod def linebreak(self): return '' @abstractmethod def paragraph(self, text): return text @abstractmethod def heading(self, text, heading_level): return text @abstractmethod def insertion(self, text, author, date): return text @abstractmethod def hyperlink(self, text, href): return text @abstractmethod def image_handler(self, path): return path @abstractmethod def image(self, path, x, y): return self.image_handler(path) @abstractmethod def deletion(self, text, author, date): return text @abstractmethod def bold(self, text): return text @abstractmethod def italics(self, text): return text @abstractmethod def underline(self, text): return text @abstractmethod def superscript(self, text): return text @abstractmethod def subscript(self, text): return text @abstractmethod def tab(self): return True @abstractmethod def ordered_list(self, text): return text @abstractmethod def unordered_list(self, text): return text @abstractmethod def list_element(self, text): return text @abstractmethod def table(self, text): return text @abstractmethod def table_row(self, text): return text @abstractmethod def table_cell(self, text): return text @abstractmethod def page_break(self): return True @abstractmethod def indent(self, text, left='', right='', firstLine=''): return text Docx2Html inherits DocxParser and implements basic HTML handling. Ex. :: class Docx2Html(DocxParser): # Escape '&', '<', and '>' so we render the HTML correctly def escape(self, text): return xml.sax.saxutils.quoteattr(text)[1:-1] # return a line break def linebreak(self, pre=None): return ' ' # add paragraph tags def paragraph(self, text, pre=None): return ' ' + text + '' However, let's say you want to add a specific style to your HTML document. In order to do this, you want to make each paragraph a class of type `my_implementation`. Simply extend docx2Html and add what you need. :: class My_Implementation_of_Docx2Html(Docx2Html): def paragraph(self, text, pre = None): return + text + '' OR, let's say FOO is your new favorite markup language. Simply customize your own new parser, overwritting the abstract methods of DocxParser :: class Docx2Foo(DocxParser): # because linebreaks in are denoted by '!!!!!!!!!!!!' with the FOO markup langauge :) def linebreak(self): return '!!!!!!!!!!!!' Custom Pre-Processor #################### When creating your own Parser (as described above) you can now add in your own custom Pre Processor. To do so you will need to set the `pre_processor` field on the custom parser, like so: :: class Docx2Foo(DocxParser): pre_processor_class = FooPrePorcessor The `FooPrePorcessor` will need a few things to get you going: :: class FooPrePorcessor(PydocxPrePorcessor): def perform_pre_processing(self, root, *args, **kwargs): super(FooPrePorcessor, self).perform_pre_processing(root, *args, **kwargs) self._set_foo(root) def _set_foo(self, root): pass If you want `_set_foo` to be called you must add it to `perform_pre_processing` which is called in the base parser for pydocx. Everything done during pre-processing is executed prior to `parse` being called for the first time. Styles ###### The base parser `Docx2Html` relies on certain css class being set for certain behaviour to occur. Currently these include: * class `pydocx-insert` -> Turns the text green. * class `pydocx-delete` -> Turns the text red and draws a line through the text. * class `pydocx-center` -> Aligns the text to the center. * class `pydocx-right` -> Aligns the text to the right. * class `pydocx-left` -> Aligns the text to the left. * class `pydocx-comment` -> Turns the text blue. * class `pydocx-underline` -> Underlines the text. * class `pydocx-caps` -> Makes all text uppercase. * class `pydocx-small-caps` -> Makes all text uppercase, however truly lowercase letters will be small than their uppercase counterparts. * class `pydocx-strike` -> Strike a line through. * class `pydocx-hidden` -> Hide the text. Exceptions ########## Right now there is only one custom exception (`MalformedDocxException`). It is raised if either the `xml` or `zipfile` libraries raise an exception. Optional Arguments ################## You can pass in `convert_root_level_upper_roman=True` to the parser and it will convert all root level upper roman lists to headings instead. Changelog ========= * 0.3.8 * If zipfile fails to open the passed in file, we are now raising a `MalformedDocxException` instead of a `BadZipFIle`. * 0.3.7 * Some inline tags (most notably the underline tag) could have a `val` of `none` and that would signify that the style is disabled. A `val` of `none` is now correctly handled. * 0.3.6 * It is possible for a docx file to not contain a `numbering.xml` file but still try to use lists. Now if this happens all lists get converted to paragraphs. * 0.3.5 * Not all docx files contain a `styles.xml` file. We are no longer assuming they do. * 0.3.4 * It is possible for `w:t` tags to have `text` set to `None`. This no longer causes an error when escaping that text. * 0.3.3 * In the event that `cElementTree` has a problem parsing the document, a `MalformedDocxException` is raised instead of a `SyntaxError` * 0.3.2 * We were not taking into account that vertical merges should have a continue attribute, but sometimes they do not, and in those cases word assumes the continue attribute. We updated the parser to handle the cases in which the continue attribute is not there. * We now correctly handle documents with unicode character in the namespace. * In rare cases, some text would be output with a style when it should not have been. This issue has been fixed. * 0.3.1 * Added support for several more OOXML tags including: * caps * smallCaps * strike * dstrike * vanish * webHidden More details in the README. * 0.3.0 * We switched from using stock *xml.etree.ElementTree* to using *xml.etree.cElementTree*. This has resulted in a fairly significant speed increase for python 2.6 * It is now possible to create your own pre processor to do additional pre processing. * Superscripts and subscripts are now extracted correctly. * 0.2.1 * Added a changelog * Added the version in pydocx.__init__ * Fixed an issue with duplicating content if there was indentation or justification on a p element that had multiple t tags. - Downloads (All Versions): - 129 downloads in the last day - 700 downloads in the last week - 816 downloads in the last month - Author: Jason Ward, Sam Portnow - License: BSD - Platform: any - Categories - Development Status :: 3 - Alpha - Intended Audience :: Developers - License :: OSI Approved :: BSD License - Operating System :: OS Independent - Programming Language :: Python - Programming Language :: Python :: 2.6 - Programming Language :: Python :: 2.7 - Programming Language :: Python :: 2 :: Only - Topic :: Text Processing :: Markup :: HTML - Topic :: Text Processing :: Markup :: XML - Package Index Owner: jason.ward, kylegibson - DOAP record: PyDocX-0.3.8.xml
https://pypi.python.org/pypi/PyDocX/0.3.8
CC-MAIN-2015-11
refinedweb
1,167
54.93
A format string vulnerability in tool used to help build SQLite's TCL extension on Windows (1.1) By salmonx on 2021-06-15 11:30:34 edited from 1.0 [link] [source] The 'printf' in SubstituteFile accepts a format string as an argument, but the format string originates from the argv[3] which is controlled by user. static int SubstituteFile( const char *substitutions, const char *filename) { ... fp = fopen(filename, "rt"); if (fp != NULL) { while (fgets(szBuffer, cbBuffer, fp) != NULL) { list_item_t *p = NULL; for (p = substPtr; p != NULL; p = p->nextPtr) { char *m = strstr(szBuffer, p->key); if (m) { char *cp, *op, *sp; cp = szCopy; op = szBuffer; while (op != m) *cp++ = *op++; sp = p->value; while (sp && *sp) *cp++ = *sp++; op += strlen(p->key); while (*op) *cp++ = *op++; *cp = 0; memcpy(szBuffer, szCopy, sizeof(szCopy)); } } printf(szBuffer); // Vulnerability } list_free(&substPtr); } fclose(fp); return 0; } (2) By Larry Brasfield (larrybr) on 2021-06-15 13:36:10 in reply to 1.1 [link] [source] Thanks for keeping an eye out for bugs and vulnerabilities. And please do not let the following explanation discourage such efforts. We do not consider it a problem when somebody who builds the SQLite library has the power to sabotage their results. The tool is normally used from a process governed by a Makefile and gets sensible arguments. That it might do something bizarre when invoked bizarrely is of no consequence to the project. And, as Richard mentioned elsewhere today, the tool is not authored as part of the project; it is merely reused from an outside source. So it is not the project's concern. (3) By Jan Nijtmans (jan.nijtmans) on 2021-07-01 06:36:08 in reply to 1.1 [link] [source] This has been assigned a CVE number: CVE-2021-35331 It's fixed in this commit: And also updated in sampleextension: (4) By Jan Nijtmans (jan.nijtmans) on 2021-07-06 08:45:34 in reply to 3 [link] [source] (5) By Larry Brasfield (larrybr) on 2021-07-06 11:15:37 in reply to 4 [link] [source] (Replying only after calming myself on this "CVE".) In addition to being of zero (non-political) consequence to the SQLite project, the revised nmakehelp.c from its source project has now replaced that "buggy" version. See check-in 595bf95. So, it's fixed. BTW, all C compilers should have a CVE because they will compile: #include <stdlib.h> int main(){ return *((int *)rand()); } (6) By Scott Robison (casaderobison) on 2021-07-06 18:28:19 in reply to 5 [source] +1
https://sqlite.org/forum/info/40a5b329843ec3ba
CC-MAIN-2022-27
refinedweb
428
73.78
The Listbox widget is a standard Tkinter widget used to display a list of alternatives. The listbox can only contain text items, and all items must have the same font and color. Depending on the widget configuration, the user can choose one or more alternatives from the list. When to use the Listbox Widget Listboxes are used to select from a group of textual items. Depending on how the listbox is configured, the user can select one or many items from that list. Patterns # When you first create the listbox, it is empty. The first thing to do is usually to insert one or more lines of text. The insert method takes an index and a string to insert. The index is usually an item number (0 for the first item in the list), but you can also use some special indexes, including ACTIVE, which refers to the “active” item (set when you click on an item, or by the arrow keys), and END, which is used to append items to the list. from Tkinter import * master = Tk() listbox = Listbox(master) listbox.pack() listbox.insert(END, "a list entry") for item in ["one", "two", "three", "four"]: listbox.insert(END, item) mainloop() To remove items from the list, use the delete method. The most common operation is to delete all items in the list (something you often need to do when updating the list). listbox.delete(0, END) listbox.insert(END, newitem) You can also delete individual items. In the following example, a separate button is used to delete the ACTIVE item from a list. lb = Listbox(master) b = Button(master, text="Delete", command=lambda lb=lb: lb.delete(ANCHOR)) The listbox offers four different selection modes through the selectmode option. These are SINGLE (just a single choice), BROWSE (same, but the selection can be moved using the mouse), MULTIPLE (multiple item can be choosen, by clicking at them one at a time), or EXTENDED (multiple ranges of items can be chosen, using the Shift and Control keyboard modifiers). The default is BROWSE. Use MULTIPLE to get “checklist” behavior, and EXTENDED when the user would usually pick only one item, but sometimes would like to select one or more ranges of items. lb = Listbox(selectmode=EXTENDED) To query the selection, use curselection method. It returns a list of item indexes, but a bug in Tkinter 1.160 (Python 2.2) and earlier versions causes this list to be returned as a list of strings, instead of integers. This may be fixed in later versions of Tkinter, so you should make sure that your code is written to handle either case. Here’s one way to do that: items = map(int, list.curselection()) In versions before Python 1.5, use string.atoi instead of int. Use the get method to get the list item corresponding to a given index. Note that get accepts either strings or integers, so you don’t have to convert the indexes to integers if all you’re going to do is to pass them to get. You can also use a listbox to represent arbitrary Python objects. In the next example, we assume that the input data is represented as a list of tuples, where the first item in each tuple is the string to display in the list. For example, you could display a dictionary by using the items method to get such a list. self.lb.delete(0, END) # clear for key, value in data: self.lb.insert(END, key) self.data = data When querying the list, simply fetch the items from the data attribute, using the selection as an index: items = self.lb.curselection() items = [self.data[int(item)] for item in items] In earlier versions of Python, you can use this instead: items = self.lb.curselection() try: items = map(int, items) except ValueError: pass items = map(lambda i,d=self.data: d[i], items) Unfortunately, the listbox doesn’t provide a command option allowing you to track changes to the selection. The standard solution is to bind a double-click event to the same callback as the OK (or Select, or whatever) button. This allows the user to either select an alternative as usual, and click OK to carry out the operation, or to select and carry out the operation in one go by double-clicking on an alternative. This solution works best in BROWSE and EXTENDED modes. lb.bind("<Double-Button-1>", self.ok) FIXME: show how to use bindtags to insert custom bindings after the standard bindings FIXME: show how to use custom events in later versions of Tkinter If you wish to track arbitrary changes to the selection, you can either rebind the whole bunch of selection related events (see the Tk manual pages for a complete list of Listbox event bindings), or, much easier, poll the list using a timer: class Dialog(Frame): def __init__(self, master): Frame.__init__(self, master) self.list = Listbox(self, selectmode=EXTENDED) self.list.pack(fill=BOTH, expand=1) self.current = None self.poll() # start polling the list def poll(self): now = self.list.curselection() if now != self.current: self.list_has_changed(now) self.current = now self.after(250, self.poll) def list_has_changed(self, selection): print "selection is", selection By default, the selection is exported via the X selection mechanism (or the clipboard, on Windows). If you have more than one listbox on the screen, this really messes things up for the poor user. If she selects something in one listbox, and then selects something in another, the original selection disappears. It is usually a good idea to disable this mechanism in such cases. In the following example, three listboxes are used in the same dialog: b1 = Listbox(exportselection=0) for item in families: b1.insert(END, item) b2 = Listbox(exportselection=0) for item in fonts: b2.insert(END, item) b3 = Listbox(exportselection=0) for item in styles: b3.insert(END, item) The listbox itself doesn’t include a scrollbar. Attaching a scrollbar is pretty straightforward. Simply set the xscrollcommand and yscrollcommand options of the listbox to the set method of the corresponding scrollbar, and the command options of the scrollbars to the corresponding xview and yview methods in the listbox. Also remember to pack the scrollbars before the listbox. In the following example, only a vertical scrollbar is used. For more examples, see pattern section in the Scrollbar description. frame = Frame(master) scrollbar = Scrollbar(frame, orient=VERTICAL) listbox = Listbox(frame, yscrollcommand=scrollbar.set) scrollbar.config(command=listbox.yview) scrollbar.pack(side=RIGHT, fill=Y) listbox.pack(side=LEFT, fill=BOTH, expand=1) With some more trickery, you can use a single vertical scrollbar to scroll several lists in parallel. This assumes that all lists have the same number of items. Also note how the widgets are packed in the following example. def __init__(self, master): scrollbar = Scrollbar(master, orient=VERTICAL) self.b1 = Listbox(master, yscrollcommand=scrollbar.set) self.b2 = Listbox(master, yscrollcommand=scrollbar.set) scrollbar.config(command=self.yview) scrollbar.pack(side=RIGHT, fill=Y) self.b1.pack(side=LEFT, fill=BOTH, expand=1) self.b2.pack(side=LEFT, fill=BOTH, expand=1) def yview(self, *args): apply(self.b1.yview, args) apply(self.b2.yview, args) Reference # -). [comment on/vote for this article] Comment: According to the Tk portion of the Tcl manual page, the virtual event <<ListboxSelect>> is sent when the selection is changed in a listbox, thereby making this work-around unnecessary. According to the Tcl Wiki; however, this occured in Tk 8.1, so I don't know if it was carried over to Tkinter. Posted by Chris (via mail) (2006-10-24)
http://sandbox.effbot.org/tkinterbook/listbox.htm
crawl-003
refinedweb
1,273
57.87
address@hidden (Tomas Ebenlendr) writes: > Okay, here is the patch without dependency symbol. (I hope it is GCS > compliant now.) Great. I have still some questions about the patch. How do both loaders (that is both in normal mode and rescue mode) find the generic chainloader? If I understand it correctly, the chain.mod depends on _chain.mod, but how does this work? Does it load _normal.mod and use those symbols? Or does it really share the code? (so there is not dependency on the rescue mode chainloader at all, even when it is not part of core.mod). Perhaps it would be better to just use one sourcecode file, chainloader.c, like this: #ifdef GRUB_NORMAL_MODE the normalmode code #else rescue mode code #endif And just put -DGRUB_NORMAL_MODE in the CFLAGS when compiling the normal mode module. That will make it possible to use one sourcefile for both modes. Perhaps I just misunderstood your patch, if that is the case, please tell me. Next time I try to read your entire patch a bit earlier. Same for the other patches, I try to apply them (after cleaning them up a bit) this weekend. Now I am still busy with school, it is taking longer than I have expected. I hope you understand. Thanks, Marco
https://lists.gnu.org/archive/html/grub-devel/2004-06/msg00095.html
CC-MAIN-2022-27
refinedweb
215
77.23
Ganymede is now at 1.04, with some nice improvements to the GUI, a fix to an exception thrown if a user gives a bad username/password pair to the xml client and, most importantly, a fix to the schema editor to make it check for the effect on the namespace constraint indices during schema editing. I've been thinking some about how to make Ganymede support an object containment hierarchy, and it seems that the way to do that would be to make the Ganymede schema more restrictive, rather than less. Right now, a single object may be owned by more than one owner group, which makes a strict tree ownership structure impossible. Making objects only able to have one owner would make it possible to have objects structured in a proper tree, which might be quite nice for the client. Ganymede effectively allows arbitrary object graphs to be implemented using object pointers, but some people seem to be freaked out that they can't stuff things inside other things and make the client display things that way. I've also been thinking about how to make the Ganymede server more scalable. SleepyCat's Berkeley DB libraries look nice, but the Ganymede server was designed with heavy multithreading in mind, and is in many ways dependent on a lot of the flexibility and insane concurrency that the RAM-based database provides. As with the schema containment hierarchy issue, it seems that the way to move Ganymede forward may be to actually reduce some of the degrees of flexibility, which I find an interesting outcome.
http://www.advogato.org/person/jonabbey/diary/13.html
CC-MAIN-2014-15
refinedweb
265
59.26
Hello. Does anyone knows how i can adjust the output console's result a little bit more to the right side?Is there any file which i can edit? I don't know maybe there's a margin or something like that... This is how it is currently: And this is i'd like to get: Thanks in advance! I don't know in general. If you just want to do this for a specific result of an exec command, you can apply a custom syntax to the result by defining a 'syntax' element to the args dictionary passed to the exec command. Then you can have custom syntax settings for that syntax where you set the margin setting. For build systems, I don't know if they can also get a 'syntax' argument. I assume they just pass the arguments to the exec command. According to this post: You can define syntax highlighting for the output panel using Packages/Default/Plain text.tmLanguage and then apply a theme to format that output text. Packages/Default/Plain text.tmLanguage Thank you for the response and I'm sorry for the delay, I have had a problem with my linux and I'm still fixing it. Ok, I don't get it very well, I found this xml file but I don't know what I should to do. Can you explain me please? Actually, I'm new on sublime text, so I saw the post that you pointed, but with my current experience, I still can't get that, sorry about it. Thank you. Sorry I think I wasn't been clear about my issue. I don't wanna change syntax highlight, I just wanna insert a kind of margin or space where console's output result begin. The console's output text shows up very near to the left side of sublime text GUI and that's cutting up the exec result. As you guys can see: I inserted manually a space here to gave you guys an idea of how i'd like to get it: Thank you! There is the "margin" setting for a view, as gotten from view.settings().get("margin"), this is what you'd want to change in the panel view, but I have no idea how. Thank you I gonna try to find that. you can apply it for all widgets in Widget.sublime-settings Widget.sublime-settings I found this json file, and this is what I have. { "rulers": [], "translate_tabs_to_spaces": false, "gutter": false, "margin": 1, "syntax": "Packages/Text/Plain text.tmLanguage", "is_widget": true, "word_wrap": false, "auto_match_enabled": false, "scroll_past_end": false, "draw_indent_guides": false, "draw_centered": false, "auto_complete": false, "match_selection": false } I changed margin element and it set the regex search bar only, I I'll keep searching and trying.Do you know if is there a console element I should insert to do that? I just tried changing widget margin and it worked for me. What theme are you using? The Widget.sublime-settings you have to edit is the one of your active theme, it should stay in the same folder. I'm using spacegray eighties theme, and I tried changing Widget - Spacegray Eighties.sublime-settings inserting a margin element, and it also doesn't affected output panel. It's weird... Take a look: `{ "color_scheme": "Packages/Theme - Spacegray/widgets/Widget - Spacegray Eighties.stTheme", "draw_shadows": false, // Margin I have inserted "margin": 6 }` I'm also using Anaconda plugin, and I found this: /* Theme to use in the output panel. Uncomment the line below to override the default tests runner output, by default the theme is PythonConsoleDark.hidden-tmTheme NOTE: The file specified here **MUST** exists in `Packages/Anaconda` */ // "test_runner_theme": "PythonConsoleDark.hidden-tmTheme", Maybe I should to do that, putting base16-eighties.dark.tmTheme in Packages/Anaconda and to try to override that in anaconda settings file. I'll try. There's a bit of confusion between themes and color schemes. tmTheme files are actually a color scheme. Themes are the ones with the extension "sublime-theme". I tried to do what you said you did with your theme, modified Widget - Spacegray Eighties.sublime-settings and it worked for the console, but not for the build panel. I think because it's not considered a widget, and I think you need a plugin to do this. Try this: import sublime import sublime_plugin class BuildOutputListener(sublime_plugin.EventListener): def on_activated(self, view): if view == view.window().find_output_panel("exec"): view.settings().set('margin', 15) It should set the margin to 15 when you build and click on the build panel the first time, then it should stay like that until you close ST. You can also do this if you want to open the build view in a normal view after clicking on the build panel: on_activated(self, view): if view == view.window().find_output_panel("exec"): view.window().run_command('build_to_view') view.settings().set('margin', 5) You can change the color scheme of the build view by editing the line: scheme = 'Packages/Anaconda/PythonConsoleDark.hidden-tmTheme' It worked like a charm! Thank you! Actually I also used your code idea, and inserted this line in exec.py: `self.output_view.settings().set("margin", 10)` It's just to I don't needed clicking on the build panel all the time.If you have any other suggestion of how I can do that like plugin, it will be welcome. Thanks for your patience. You can do this (a variant of the above): set_margin(self, window): if window: v = window.find_output_panel("exec") if v: v.settings().set('margin', 5) def on_text_command(self, view, command_name, args): if view == view.window().find_output_panel("exec"): if command_name == "drag_select" and 'by' in args and \ args['by'] == 'words': view.window().run_command('build_to_view') def on_window_command(self, window, command_name, args): if command_name == "build": sublime.set_timeout_async(lambda: self.set_margin(window)) This detects the build command and sets the margin. Double click on the panel calls the command of the other post(if you like it, otherwise delete def on_text_command. def on_text_command It's worked! Thanks again for your help. That's gonna help me two times cause I'm studying Python, I hope to improve soon and got Python better. See ya.
https://forum.sublimetext.com/t/is-there-a-way-to-adjust-output-consoles-result/32370
CC-MAIN-2018-13
refinedweb
1,029
65.52
I used a raspbian pisces R2 wheezy image () apt-get updated and upgraded. I installed the packages required following the installation instructions for Ubuntu at. I installed the development version (1.3.0) from github. Then I modified the setup.py file so that it would find the Broadcom files in /opt/vc before looking in /usr. Eg replacing lines such as - Code: Select all default_header_dirs = ['/usr/include', '/usr/local/include'] with - Code: Select all default_header_dirs = ['/opt/vc/include', '/usr/include', '/usr/local/include'] I then compiled and installed Kivy using checkinstall to make removal easier and cleaner - Code: Select all checkinstall python setup.py install This simple Hello World runs from a terminal window in X: - Code: Select all from kivy.app import App from kivy.uix.button import Button class TestApp(App): def build(self): return Button(text='Hello World') TestApp().run() The full-window button turns blue a full second after the mouse click. Is this an issue with X, the drivers, my approach or have I missed something? At least it seems possible that Python Kivy will be usable eventually.
http://www.raspberrypi.org/phpBB3/viewtopic.php?f=68&p=102469
CC-MAIN-2013-20
refinedweb
184
58.89
24752/how-to-access-hive-via-python appears to be outdated. When I add this to /etc/profile: export PYTHONPATH=$PYTHONPATH:/usr/lib/hive/lib/py I can then do the imports as listed in the link, with the exception of from hive import ThriftHivewhich actually need to be: from hive_service import ThriftHive Next the port in the example was 10000, which when I tried caused the program to hang. The default Hive Thrift port is 9083, which stopped the hanging. So I set it up like so: from thrift import Thrift from thrift.transport import TSocket from thrift.transport import TTransport from thrift.protocol import TBinaryProtocol try: transport = TSocket.TSocket('<node-with-metastore>', 9083) transport = TTransport.TBufferedTransport(transport) protocol = TBinaryProtocol.TBinaryProtocol(transport) client = ThriftHive.Client(protocol) transport.open() client.execute("CREATE TABLE test(c1 int)") transport.close() except Thrift.TException, tx: print '%s' % (tx.message) I received the following error: Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/lib/hive/lib/py/hive_service/ThriftHive.py", line 68, in execute self.recv_execute() File "/usr/lib/hive/lib/py/hive_service/ThriftHive.py", line 84, in recv_execute raise x thrift.Thrift.TApplicationException: Invalid method name: 'execute' But inspecting the ThriftHive.py file reveals the method execute within the Client class. How may I use Python to access Hive? The easiest way is to use PyHive. To install you'll need these libraries: pip install sasl pip install thrift pip install thrift-sasl pip install PyHive After installation, you can connect to Hive like this: from pyhive import hive conn = hive.Connection(host="YOUR_HIVE_HOST", port=PORT, username="YOU") Now that you have the hive connection, you have options how to use it. You can just straight-up query: cursor = conn.cursor() cursor.execute("SELECT cool_stuff FROM hive_table") for result in cursor.fetchall(): use_result(result) ...or to use the connection to make a Pandas dataframe: import pandas as pd df = pd.read_sql("SELECT cool_stuff FROM hive_table", conn) Hope this is helpful! To know more join Master in Python programming course today. Thanks! You need to set this property in your hive-site.xml file. <property> <name>hive.server2.authentication</name> <value>NOSASL</value> </property> Impala provides faster response as it uses MPP(massively ...READ MORE Okay,here's the code snippet to work in the ...READ MORE Well, what you can do is use ...READ MORE JDBC is not required here. Create a hive ...READ MORE Firstly you need to understand the concept ...READ MORE Hi, You can create one directory in HDFS ...READ MORE In your case there is no difference ...READ MORE The distributed copy command, distcp, is a ...READ MORE hash_id, COLLECT_LIST(num_of_cats) AS ...READ MORE It is now possible to insert like ...READ MORE OR At least 1 upper-case and 1 lower-case letter Minimum 8 characters and Maximum 50 characters Already have an account? Sign in.
https://www.edureka.co/community/24752/how-to-access-hive-via-python?show=72943
CC-MAIN-2022-27
refinedweb
483
61.53
Get the highlights in your inbox every week. How 2018 learned to do the cloud right How 2018 learned to do the cloud right Forecast: Cloudy with a chance of more clouds. Subscribe now Over the last five years, I've worked in various roles bringing high-performance computing (HPC) into the public cloud. It's been fascinating to watch the industry change from saying "You can't do HPC in the cloud" to "How can I do HPC in the cloud?" to "Why wouldn't I do HPC in the cloud?" Of course, HPC is just one narrow slice of the tech sector, but you can see that same pattern (albeit on different timelines) across the industry. In 2018, it's pretty clear that the cloud model—whether public, private, or hybrid—is the way to go. You can see that reflected in the content on Opensource.com, too. Last year, editor Jason Baker recapped eight resources for understanding the open source cloud. The articles from 2017 largely center on explaining cloud concepts and making a case for having your head in the clouds. In 2018, we see a shift toward that second phase: articles focused on the how because the why is already assumed. Let's take a look at some of the highlights from this year in the open source cloud. How to connect to a remote desktop from Linux The thing about cloud-based desktops is that you can't exactly plug your keyboard and monitor into them. There are a lot of reasons you might want to run a desktop in a cloud environment, but how do you connect to it? Kedar Vijay Kulkarni wrote a step-by-step guide to using Remmina for remote desktop. With support for both VNC (used by Linux) and RDP (used by Windows), Remmina is a great tool for connecting to desktops in the cloud or anywhere else. A sysadmin's guide to containers One way to take better advantage of the scale and flexibility of the cloud is to use containers. Containers provide a convenient way to package and isolate applications on a host without the overhead of full virtualization. But it helps to understand how they work before you try to use them. Dan Walsh takes a look at containers from a sysadmin's point of view. In this article, he examines process namespaces, storage, registries, and more. 7 open source platforms to get started with serverless computing Containers are nice and all, but what if you got even more abstract? "Serverless" computing still requires servers, of course, but you no longer have to care about them. While cloud service providers such as Amazon Web Services and Microsoft Azure provide their own serverless (also called "Functions-as-a-Service") offerings, you can also create a serverless environment on your own, er, servers. In this article, Daniel Oh introduces seven open source serverless platforms and explains how to get started. Host your own cloud with Raspberry Pi NAS Part of the decision to use a public cloud provider is the belief that they can run the service better than you can. That may mean cheaper, more secure, faster upgrade cycles, or anything else that makes sense for your needs. But sometimes you might want or need to run your own private cloud service. In that case, there's Nextcloud and a Raspberry Pi. This article caps off a three-part series by Manuel Dewald that takes you through getting the hardware and software set up to turn a Raspberry Pi into network-attached storage (NAS), configuring backups, and finally installing Nextcloud. An introduction to Ansible Operators in Kubernetes One of the great things about containers is they make it easy to quickly scale out many instances of a service (for example, a fleet of web frontends for when your project gets a big spike in traffic after an Opensource.com article). Of course, that leads to the problem of how you manage those containers. Kubernetes is a leading choice for container orchestration; its Operators feature deploys and manages a service or application. In this article, Michael Hrivnak introduces Ansible to build a Kubernets operator. How Kubernetes became the solution for migrating legacy applications Containers may be how applications are developed these days, but what about the legacy applications you still depend on? They're often monolithic, proprietary, and surrounded by years or decades of cruft. Swapnil Bhartiya suggests putting those applications in a container and building new functionality around them. Over time, you can begin tearing down the monoliths in favor of modern designs. What are cloud-native applications? The modern notion of cloud computing has been around for over a decade. While mainstream adoption isn't quite that old, it's been around long enough that practitioners have started to figure out how to best run services in the cloud. But more than that, they've figured out the best way to design those services with a cloud model in mind. Gordon Haff says cloud-native means the "intersection of containerized infrastructure and applications composed using fine-grained API-driven services, aka, microservices." While this answers the immediate question, it's worth reading the rest of the article for the historical context and how this definition applies to modern applications. Getting started with openmediavault: A home NAS solution The days of a single computer shared by the family are long gone. These days, you're likely to have several computers, where computer means any device that can access digital files. Maybe you have a desktop and a laptop, a tablet or two, your phone, your set-top streaming box, your video game system, et cetera, et cetera. This means you'll want a place to centrally store files for sharing and backup. Community Moderator Jason van Gumster introduces openmediavault and describes how he set it up to be a network-attached storage (NAS) solution in his home. There's no rest for the wicked, but the modern computing landscape has a lot of REST services. If you want to build middleware to connect your REST services, Apache Camel is one option to help you achieve that goal. Mary Cochran and Krystal Ying shared their tips for getting started based on their poster at this year's Grace Hopper Celebration of Women in Computing. Running integration tests in Kubernetes Code tests—you do that, right? Of course you do! But doing integration tests to ensure external operations work is challenging. You want a fresh environment every time, but you need it quickly and you want it to be identical between tests. Using Kubernetes to manage your test pipeline is a great way to meet those goals. Balazs Szeti gives a thorough tutorial for creating a Jenkins build environment on OpenShift.
https://opensource.com/article/19/1/best-of-cloud
CC-MAIN-2019-39
refinedweb
1,132
60.85
Its all of the functions I know so far. Printable View Its all of the functions I know so far. Well, after fixing the syntax errors... The program itself is okay I suppose, I'd recommend psychiatric help, but otherwise it's okay ;) Keep in mind that any use of conio is nonstandard and may not work on all systems. Your using expressions will not work, both because they are invalid as well as the fact that you are using the prestandard iostream. Namespaces were added after the prestandard iostream was replaced in support of the new one. A better way to do this would have simply been: #include <iostream> using namepsace std; Global variables are evil, avoid them if possible and use them only when absolutely necessary without any alternatives. Data hiding is one of the biggest attributes of C++, use it. I found your style a bit hard to follow at first, but that was only temporary. To give the reader of your code a better sense of control flow you might want to consider either switch statements or if...else if clauses instead of the mass of single ifs you have. On line 44 (after my original include changes) there was an error for missing a ; after the string. This was because you had a string variable followed by a string literal without the correct operator to separate them cout<<"\n\nGreat "<<frtname<<". What college will you be going to "<<frtname /*error here, missing <<*/ "? "; Also, you use the nonstandard functin getch() where a standard call to getchar() would serve just as well and without any noticeable difference. Just a quick note on the logic, for some reason the program thinks I'm a teenager when I enter 33...that may be a problem. -Prelude > I'm a teenager when I enter 33 Also, you're not when you're 13. my apoligies I wasn't really done with it though.............
https://cboard.cprogramming.com/cplusplus-programming/9349-who-likes-my-program-printable-thread.html
CC-MAIN-2017-30
refinedweb
323
71.75
With C++20, the chrono library from C++11 receives important additions. The most prominent ones are a calendar and time-zones support. But this is by far not all. C++20 gets new clocks, powerful formatting functionality for time durations, and a time of day type. Before I dive into the extended chrono library and, in particular, in this post into the new type std::chrono::time_of_day, I have to make a few remarks. In short, I call all date and time utilities provided by chrono the time library. To get the most out of this post, a basic understanding of the Chrono library is essential. C++11 introduced three main components to deal with time: Honestly, time is for me a mysterium. On one hand, each of us has an intuitive idea of time, on the other hand, defining it formally is extremely challenging. For example, the three components time point, time duration, and clock depend on each other. If you want to know more about the time functionality in C++11, read my previous posts: time. chrono C++20 adds new components to the chrono library: Essentially, the time-zone functionality (C++20) is based on the calendar functionality (C++20) and the calendar functionality (C++20) is based on the original chrono functionality (C++11). But that is not all. The extension includes new clocks. Thanks to the formatting library in C++20, time durations can be read or written. date While writing this post, no C++ compiler supports the chrono extensions in C++20 so far. Thanks to the prototype library date from Howard Hinnant, which is essentially a superset of the extended time functionality in C++20, I can experiment with it. The library is hosted on GitHub. There are various ways to use the date prototype: date.h std::time_of_day I made my first steps with the online compiler wandbox but switched to the strategy 2. In general, a C++14 compiler is sufficient to use the date library. There is one exception to this rule which I experienced with the following call: auto timeOfDay = date::time_of_day(10.h + 98min + 2020s + 0.5s); // C++20 auto timeOfDay = date::hh_mm_ss(10.h + 98min + 2020s + 0.5s); // C++17 I used in my first tries the first call. The first line requires class template argument deduction for alias templates. time_of_day is an alias for hh_mm_ss: using time_of_day_day = hh_mm_ss<Duration>. When you now replace the alias with the class template such as in the second line (C++17), you need a C++17. Class template argument deduction is a C++17. time_of_day hh_mm_ss: using time_of_day_day = hh_mm_ss<Duration>. Read more details to class template argument deduction here: C++17: What's new in the Core Language? With C++20 time_of_day was renamed to hh_mm_ss. Howard Hinnant, the creator of the date library and designer of the chrono addition, gave me the crucial hint: "Prefer to use hh_mm_ss in place of time_of_day. time_of_day got renamed to hh_mm_ss during the standardization process for C++20, and so time_of_day remains strictly as a backwards-compatible shim for current and past users of this lib." hh_mm_ss. hh_mm_ss time_of_day This is typical Odysses you have when you are an early adopter. When you only use the content of the date prototype that is part of C++20, porting it to C++ 20 is no big deal. Replace the date header files with the header file <chrono> and the namespace date with the namespace std::chrono: <chrono> std::chrono #include "date.h" //#include <chrono> int main() { using namespace std::chrono_literals; auto am = date::is_am(10h); // auto am = std::chrono::is_am(10h); } Now, I write about the chrono extensions in C++20. std::chrono::hh_mm_ss is the duration since midnight split into hours, minutes, seconds, and fractional seconds. This type is typically used as a formatting tool. First, the following table gives you a concise overview of std::chrono::hh_mm_ss instance tOfDay. std::chrono::hh_mm_ss tOfDay. The following program uses the functions: // timeOfDay.cpp #include "date.h" #include <iostream> int main() { using namespace date; // (3) using namespace std::chrono_literals; std::cout << std::boolalpha << std::endl; auto timeOfDay = date::hh_mm_ss(10.5h + 98min + 2020s + 0.5s); // (1) std::cout<< "timeOfDay: " << timeOfDay << std::endl; // (2) std::cout << std::endl; std::cout << "timeOfDay.hours(): " << timeOfDay.hours() << std::endl; // (4) std::cout << "timeOfDay.minutes(): " << timeOfDay.minutes() << std::endl; // (4) std::cout << "timeOfDay.seconds(): " << timeOfDay.seconds() << std::endl; // (4) std::cout << "timeOfDay.subseconds(): " << timeOfDay.subseconds() << std::endl; // (4) std::cout << "timeOfDay.to_duration(): " << timeOfDay.to_duration() << std::endl; // (5) std::cout << std::endl; std::cout << "date::hh_mm_ss(45700.5s): " << date::hh_mm_ss(45700.5s) << '\n'; // (6) std::cout << std::endl; std::cout << "date::is_am(5h): " << date::is_am(5h) << std::endl; // (7) std::cout << "date::is_am(15h): " << date::is_am(15h) << std::endl; // (7) std::cout << std::endl; std::cout << "date::make12(5h): " << date::make12(5h) << std::endl; // (7) std::cout << "date::make12(15h): " << date::make12(15h) << std::endl; // (7) } In line (1), I create a new instance of std::chrono::hh_mm_ss: timeOfDay. Thanks to the chrono literals from C++14, I just can add a few time durations to initialize a time of day object. With C++20, you can directly output timeOfDay (line 2). This is the reason I have to introduce the namespace date in line 3. The rest should be straightforward to read. Lines (4) display the components of the time since midnight in hours, minutes, seconds, and fractional seconds. Line (5) returns the time duration since midnight in seconds. Line (6) is more interesting: the given seconds correspond to the time displayed inline (2). Line (7) returns if the given hour is a.m. Line (8) finally returns the 12-hour equivalent of the given hour. timeOfDay timeOfDay Thanks to the date library, here is the output of the program: My next post presents the next component of the extended chrono library: calendar. 252 Yesterday 5796 Week 34683 Month 212004 All 9685636 Currently are 138 guests and no members online Kubik-Rubik Joomla! Extensions Read more... Read more...
https://modernescpp.com/index.php/calendar-and-time-zone-in-c-20
CC-MAIN-2022-27
refinedweb
1,009
57.87
Episode 51 · April 17, 2015 Learn how to add PDF receipts to your application so users can easily download receipts of their purchases Tax season just wrapped up and a lot of us have been building PDF receipts into our rails application I'm sure. The thing that I had to do this tax season was to set up receipts for GoRails and for onemonth.com. As I was building that, I noticed that I wanted to copy most of the code into GoRails but display a few different fields, so I started to build a copy of it, and then it made perfect sense to extract it into a gem and to open source it so other people could have access to this as well. We're going to talk about how to add receipts into your rails application today using the receipts gem that I made, and then part two of this episode is going to talk about how actually take some code from a rails app, extract it into a gem and then generalize it to an extent where your users can actually take it and build their own versions or implement this into their own rails applications. This is an example of one of the receipts that we can generate with this, it takes a custom logo image, it has the id number of the charge so we can look it up for support purpose, you can have a custom message saying: "Thanks", and then you can have a list of line items, these are important things for the charge itself is when it happened, the account was billed to the product they purchased, the amount and the credit card, and you probably have a bunch of other things like your European customers might want to add v80 information for tax purposes as well. This is an example of what you can do with this gem, and you can swap out any of these pieces with your own custom stuff, and it will generate a PDF that looks very similar but with your own information. Let's talk about building this into your own rails app. I've got two models in my rails application that I've scaffolded up, and one is users, and they have an email address and a name, and then the other model is a charge. The charge just keeps transaction logs, this is something that you would keep as a duplicate of your purchases in Stripe, so when you save a purchase in Stripe you keep a copy in your database, and it would keep track of the product, the user id, the amount in cents, so you want to save in cents for saving yourself a little bit of trouble with rounding issues, you would save the card type and the card's last four digits. You will basically have these two models, so you have a user model and then you'll have these charge, and the way that we set up this gem is to first go into our gemfile, add gem 'receipts' and then we'll hop into our terminal and run bundle to install the gems, and once that's done, we can restart our rails server. How do we get a receipt from a charge? The easiest way is to define: app/models/charge.rb class Charge < ActiveRecord::Base belongs_to :user validates :stripe_id, uniqueness: true def receipt Receipts::Receipt.new( id: id, product: "GoRails", company: { name: "One Month, Inc.", address: "37 Great Jones\nFloor 2\nNew York City, NY 10012", email: "[email protected]", logo: Rails.root.join("app/assets/images/one-month-dark.png") }, line_items: [ ["Date", created_at.to_s], ["Account Billed", "#{user.name} (#{user.email})"], ["Product", "GoRails"], ["Amount", "$#{amount / 100}.00"], ["Charged to", "#{card_type} (**** **** **** #{card_last4})"], ["Transaction ID", uuid] ], font: { bold: Rails.root.join('app/assets/fonts/tradegothic/TradeGothic-Bold.ttf'), normal: Rails.root.join('app/assets/fonts/tradegothic/TradeGothic.ttf'), } ) end end We can break this down a little bit and take a look. It starts out with the id, which is the database id of the charge. You might want to extract this and change it to a uuid of some sort, but this id is what will show up in the receipt at the top, and then you can pass in either the database id or generated one as well. That's up to you, but it's always a good rule of thumb to remind in the receipt that the product is important to save what they purchased, and then always have your company information as well and the logo so they can remember who it was from in a glance. The line items vary a little bit by company, but in our case, all we need is the created_at timestamp, which I already have an example for, we're going to give you the created_at date converted to a string, now you can format this better if you use strf time, you can take that and convert it to a format that you like which the best website for this, this one is phenomenal for taking a a datetime or a time in ruby and converting it to your desired result, and then having a great little reference here to test it all live. You can make modifications witht this and then copy this format into your rails app. That's a pro tip if you haven't seen that site before, but getting back to this, you can see that our items are very customizable, you can just remove one of these if you don't want, it's just an array of names and values. It's really simple, you can add as many of these as you want, and I kept it an array on purpose because these will be the order that it shows up, if we were to use a hash, we wouldn't necessarily keep the same order if we converted it to printing out. This is an important design choice of implementing this gem that we'll talk about in the next episode when I explain how we can build this from scratch. You can also specify custom fonts, we have custom fonts that we use right now, and we allow that to be in the PDF to keep them consistent with us. If you don't want to use them, just delete it and that's all you have to do. One interesting design choice I made on this configuration of your receipt was to make line items an array, I didn't want to require a date or account or a product or charge a card or an amount, I wanted this to be configurable, I wanted you to be able to add or remove these dynamically if you wanted, and the way that you could do that is by making a line items method here, and you could do your items equals this array of all these, and then maybe optionally here at the end, you say: Maybe we want additional information, like maybe our European customers need to add in that ids into the receipt, so we have this additional information field on the user that gets automatically printed out on all of the receipts. Then, you can take this and you could get all of these moved into your items and then you could return items for that and replace this array with a method call to line items, and we can render a dynamic array of line items on the receipt with the additional information as necessary. This gives you a lot of flexibility on how you design your receipt line items and you can make as many as you want, or you can stick with the default, maybe you don't need to support the additional information, so you can just hardcode it, but you also have this flexibility as well. This method generates a new receipt object, but it doesn't actually render it to a file or do anything with it, it just stores the information in memory so that prawn. can generate the receipt behind the scenes inside of the gem, but there's no real magic going on here. We've got everything connected, and we are ready to go and now we need to serve it up. That means that we're going to have to go into our charges controller and actually render this PDF out. This show action is actually where we need to be writing that code. Sometimes, when you view the show, you want the JSON response, sometimes you want the html response, but in this case, we want the PDF response. This is something that you might not have been aware of, you can do a response to a format, and you can have your format html, you can have your format .json, as you would normally expect, you can also have a format .PDF, which will detect the .pdf in the url and then will render out a response, it can be any type of response, in this case, we're going to follow the rules and return a pdf on that pdf format. We're going to pass this a block and use the send_data method. This is a method that allows us to send arbitrary data back, and we'll use the type option for this to set the content type so the browser knows what we're sending back and how to interpret the data. Here we're going to take the charge object that comes from the before_action for setting the charge, we're going to ask for the receipt, and then we're going to call the render method on that, and this render method is going to tell prawn. to take that object that we designed, and passed in our data and to actually make a PDF out of it, and that's going to return the PDF object back to us as a file type object. Now, we have to pass a couple other options here. One is filename: and this allows you to do anything that you want, so you can specify this file name and I'm actually going to paste in some code here, that will take the charge'screated_at` and format the time to a date, then we're going to have -gorails-receipt.pdf. This will be the file name that will send over, and the browser will read that and see that is the file name to save the file as. The other options we want is we want to set the type to *application/pdf, and this is important so that the browser knows how to interpret this data that we're sending over, it's arbitrary up until this point. So this is a very important option to pass in. The browser may or may not be able to interpret that or make a guess at it, but you're going to want to set the type to make sure it knows, and then one often useful feature here is the disposition setting that inline, and this will allow you to set your PDF to render inside of the browser, so it's a really useful thing to do, and I'm going to format this method call like so, and this will allow it to render inside of the browser. charges_controller.rb def show respond_to do |format| format.html format.json format.pdf { send_date(@charge.receipt.render, filename: "#{@charge.created_at.strftime("%Y-%m-%d")}-gorails-receipt.pdf", type: "application/pdf", disposition: :inline ) } end end The simplest way to access this is to test it out by going to the charge itself, and then you can do .json in the url to make sure that still works, and then you can also do .pdf. This will take a second to load because it's gotta generate the PDF, and then it is automatically set up for you in the browser. That's it, you already have a working PDF generation working. This is awesome. The last thing to check here the save file name, and as you can see, this is the file name we set up in the controller, it is that option that defines this suggested file name and it doesn't have to match the url as you might have noticed here. That's totally fine, so it's nice to maybe have the url match, but the browser has the ability to interpret these file name header and set that appropriately. There's also this attachment disposition that you could use, I originally set this up with the disposition inline which means that it renders in the browser and with the disposition attachment, it's the one that's going to force the browser to download it when you view the PDF. I prefer to do inline because it's so much more convenient than downloading these files and it seems a little overkill to force users to download these files all the time. Take a look at send_data on apidock.com if you want to learn anything more about it, it's really pretty simple, but it's useful for when you want to do things like send tarbals down or raw image or something like that, this is actually a great way of using rails to send files instead of just rendering html or text of some sort back. That's a quick introduction to the receipts gem, there's not a whole lot to it yet, but I definitely recommend you make a pull request if you have an idea for a feature, or if you just want to dive into it, wait till next gorails episode where I'm going to talk about how we extracted this from a rails application, turned it into a gem, made the API for this a little bit more generic, and then also built this using prawn. Prawn is the PDF generator that we use behind the scenes and we'll be talking about that as well in the future. hashes are ordered too (for the design choice) i think since ruby 1.9 the only real difference is that "keys" can be duplicated in the array style Ah yeah, you're right. I have always been careful about hashes when it comes to ordering because I wasn't sure it was a requirement for the ruby implementation. Is it possible to render the pdf without controller? maybe sending out via emails, or saving to s3 Yep, you can call the "render" method like you normally would and send it as an attachment to email or upload it to S3 Can you give me direction on using "endless scrolling in rails" instead of pagination. Thanks in advance. This is a pretty nice way to create PDF receipts . Have you tried using and... I haven't yet, but really want to. Obviously the HTML to PDF will be way easier than how Prawn does it. The programmatic generation is no fun at all. I love wkhtmltopdf. It has saved my ass on a bunch of projects in the past. It's become my goto for PDF generation in almost any language I use. Prawn is a pain in the butt, I've used it on two projects and it has been very painful, but you do get a lot of control. I will check it out! I did some research and can't remember why I picked prawn other than I saw enough other people using it and documentation seemed robust. That said, I bet wkhtmltopdf is way easier now that I've used Prawn more than a little. The hardest part was building our completion certificates at OneMonth.com in Prawn. It was is finicky and time consuming. Hey Chris, I'm sure you're aware of livecoding.tv, but I think it would be awesome for you to livestream the creation of gems like this so we can see the process. Plus, then we'd have the recorded streams to go back to and reference. There aren't very many Rails streams, so it would also be a great opportunity to gain some followers for GoRails! Hey, the gem it's pretty nice, I'll try with a project that i'm working, but i have a question first, Can I put footer and header to the PDF as Prawn gem?. It's basically designed to be simpler rather than customizable, but you can take the PDF code that's in the gem and adapt it easily to add your own header and footer. Take a look at this code:... You basically need to just copy paste the module and class into a file in your app (say config/initializers/receipts.rb) and then override the methods you want to change inside the class. You can also copy and paste the entire file there into your app if you want to modify all of it. Hey, Do you think it's possible to generate a new receipt everytime the customer get charged? I assume there's something to do with the transaction logs to automate this? TY and good job! ;) Yeah! So normally with subscriptions, I listen to the charge.created webhook and save a copy of it to the database as a Charge record. That's like the example I use. The reason for needing the webhook is that subscriptions charge the user monthly and they don't have to initiate anything in your app. For one-time charges, you can create a Charge object immediately during checkout and use that. And if you want to store the receipt PDF files instead of generating them dynamically each time, you can save the file it generates to S3 and save it using Shrine or Carrierwave, etc and just link to that file from your view. Login or create an account to join the conversation.
https://gorails.com/episodes/pdf-receipts
CC-MAIN-2018-43
refinedweb
3,008
70.47
This document introduces Deferreds, Twisted’s preferred mechanism for controlling the flow of asynchronous code. Don’t worry if you don’t know what that means yet – that’s why you are here! It is intended for newcomers to Twisted, and was written particularly to help people read and understand code that already uses Deferreds. This document assumes you have a good working knowledge of Python. It assumes no knowledge of Twisted. By the end of the document, you should understand what Deferreds are and how they can be used to coordinate asynchronous code. In particular, you should be able to: When you write Python code, one prevailing, deep, unassailled assumption is that a line of code within a block is only ever executed after the preceding line is finished. pod_bay_doors.open() pod.launch() The pod bay doors open, and only then does the pod launch. That’s wonderful. One-line-after-another is a built-in mechanism in the language for encoding the order of execution. It’s clear, terse, and unambiguous. Exceptions make things more complicated. If pod_bay_doors.open() raises an exception, then we cannot know with certainty that it completed, and so it would be wrong to proceed blithely to the next line. Thus, Python gives us try, except, finally, and else, which together model almost every conceivable way of handling a raised exception, and tend to work really well. Function application is the other way we encode order of execution: pprint(sorted(x.get_names())) First x.get_names() gets called, then sorted is called with its return value, and then pprint with whatever sorted returns. It can also be written as: names = x.get_names() sorted_names = sorted(names) pprint(sorted_names) Sometimes it leads us to encode the order when we don’t need to, as in this example: total = 0 for account in accounts: total += account.get_balance() print "Total balance $%s" % (total,) But that’s normally not such a big deal. All in all, things are pretty good, and all of the explanation above is laboring familiar and obvious points. One line comes after another and one thing happens after another, and both facts are inextricably tied. But what if we had to do it differently? What if we could no longer rely on the previous line of code being finished (whatever that means) before we started to interpret & execute the next line of code? What if pod_bay_doors.open() returned immediately, triggering something somewhere else that would eventually open the pod bay doors, recklessly sending the Python interpreter plunging into pod.launch() ? That is, what would we do if the order of execution did not match the order of lines of Python? If “returning” no longer meant “finishing”? Asynchronous operations? How would we prevent our pod from hurtling into the still-closed doors? How could we respond to a potential failure to open the doors at all? What if opening the doors gave us some crucial information that we needed in order to launch the pod? How would we get access to that information? And, crucially, since we are writing code, how can we write our code so that we can build other code on top of it? We would still need a way of saying “do this only when that has finished”. We would need a way of distinguishing between successful completion and interrupted processing, normally modeled with try, expect, else, and finally. We need a mechanism for getting return failures and exception information from the thing that just executed to the thing that needs to happen next. We need somehow to be able to operate on results that we don’t have yet. Instead of acting, we need to make and encode plans for how we would act if we could. Unless we hack the interpreter somehow, we would need to build this with the Python language constructs we are given: methods, functions, objects, and the like. Perhaps we want something that looks a little like this: placeholder = pod_bay_doors.open() placeholder.when_done(pod.launch) Twisted tackles this problem with Deferreds, a type of object designed to do one thing, and one thing only: encode an order of execution separately from the order of lines in Python source code. It doesn’t deal with threads, parallelism, signals, or subprocesses. It doesn’t know anything about an event loop, greenlets, or scheduling. All it knows about is what order to do things in. How does it know that? Because we explicitly tell it the order that we want. Thus, instead of writing: pod_bay_doors.open() pod.launch() We write: d = pod_bay_doors.open() d.addCallback(lambda ignored: pod.launch()) That introduced a dozen new concepts in a couple of lines of code, so let’s break it down. If you think you’ve got it, you might want to skip to the next section. Here, pod_bay_doors.open() is returning a Deferred, which we assign to d. We can think of d as a placeholder, representing the value that open() will eventually return when it finally gets around to finishing. To “do this next”, we add a callback to d. A callback is a function that will be called with whatever open() eventually returns. In this case, we don’t care, so we make a function with a single, ignored parameter that just calls pod.launch(). So, we’ve replaced the “order of lines is order of execution” with a deliberate, in-Python encoding of the order of execution, where d represents the particular flow and d.addCallback replaces “new line”. Of course, programs generally consist of more than two lines, and we still don’t know how to deal with failure. In what follows, we are going to take each way of expressing order of operations in normal Python (using lines of code and try/ except) and translate them into an equivalent code built with Deferred objects. This is going to be a bit painstaking, but if you want to really understand how to use Deferreds and maintain code that uses them, it is worth understanding each example below. Recall our example from earlier: pprint(sorted(x.get_names())) Also written as: names = x.get_names() sorted_names = sorted(names) pprint(sorted_names) What if neither get_names nor sorted can be relied on to finish before they return? That is, if both are asynchronous operations? Well, in Twisted-speak they would return Deferreds and so we would write: d = x.get_names() d.addCallback(sorted) d.addCallback(pprint) Eventually, sorted will get called with whatever get_names finally delivers. When sorted finishes, pprint will be called with whatever it delivers. We could also write this as: x.get_names().addCallback(sorted).addCallback(pprint) Since d.addCallback returns d. We often want to write code equivalent to this: try: x.get_names() except Exception, e: report_error(e) How would we write this with Deferreds? d = x.get_names() d.addErrback(report_error) errback is the Twisted name for a callback that is called when an error is received. This glosses over an important detail. Instead of getting the exception object e, report_error would get a Failure object, which has all of the useful information that e does, but is optimized for use with Deferreds. We’ll dig into that a bit later, after we’ve dealt with all of the other combinations of exceptions. What if we want to do something after our try block if it actually worked? Abandoning our contrived examples and reaching for generic variable names, we get: try: y = f() except Exception, e: g(e) else: h(y) Well, we’d write it like this with Deferreds: d = f() d.addCallbacks(h, g) Where addCallbacks means “add a callback and an errback at the same time”. h is the callback, g is the errback. Now that we have addCallbacks along with addErrback and addCallback, we can match any possible combination of try, except, else, and finally by varying the order in which we call them. Explaining exactly how it works is tricky (although the Deferred reference does rather a good job), but once we’re through all of the examples it ought to be clearer. What if we want to do something after our try/ except block, regardless of whether or not there was an exception? That is, what if we wanted to do the equivalent of this generic code: try: y = f() except Exception, e: y = g(e) h(y) d = f() d.addErrback(g) d.addCallback(h) Because addErrback returns d, we can chain the calls like so: f().addErrback(g).addCallback(h) The order of addErrback and addCallback matters. In the next section, we can see what would happen when we swap them around. What if we want to wrap up a multi-step operation in one exception handler? try: y = f() z = h(y) except Exception, e: g(e) With Deferreds, it would look like this: d = f() d.addCallback(h) d.addErrback(g) Or, more succinctly: d = f().addCallback(h).addErrback(g) What about finally? How do we do something regardless of whether or not there was an exception? How do we translate this: try: y = f() finally: g() Well, roughly we do this: d = f() d.addBoth(g) This adds g as both the callback and the errback. It is equivalent to: d.addCallbacks(g, g) Why “roughly”? Because if f raises, g will be passed a Failure object representing the exception. Otherwise, g will be passed the asynchronous equivalent of the return value of f() (i.e. y). Twisted features a decorator named inlineCallbacks which allows you to work with Deferreds without writing callback functions. This is done by writing your code as generators, which yield Deferreds instead of attaching callbacks. Consider the following function written in the traditional Deferred style: def getUsers(): d = makeRequest("GET", "/users") d.addCallback(json.loads) return d using inlineCallbacks, we can write this as: from twisted.internet.defer import inlineCallbacks, returnValue @inlineCallbacks def getUsers(self): responseBody = yield makeRequest("GET", "/users") returnValue(json.loads(responseBody)) a couple of things are happening here: addCallbackon the Deferredreturned by makeRequest, we yield it. This causes Twisted to return the Deferred‘s result to us. returnValueto propagate the final result of our function. Because this function is a generator, we cannot use the return statement; that would be a syntax error. Note New in version 15.0. On Python 3.3 and above, instead of writing returnValue(json.loads(responseBody)) you can instead write return json.loads(responseBody). This can be a significant readability advantage, but unfortunately if you need compatibility with Python 2, this isn’t an option. Both versions of getUsers present exactly the same API to their callers: both return a Deferred that fires with the parsed JSON body of the request. Though the inlineCallbacks version looks like synchronous code, which blocks while waiting for the request to finish, each yield statement allows other code to run while waiting for the Deferred being yielded to fire. inlineCallbacks become even more powerful when dealing with complex control flow and error handling. For example, what if makeRequest fails due to a connection error? For the sake of this example, let’s say we want to log the exception and return an empty list. def getUsers(): d = makeRequest("GET", "/users") def connectionError(failure): failure.trap(ConnectionError) log.failure("makeRequest failed due to connection error", failure) return [] d.addCallbacks(json.loads, connectionError) return d With inlineCallbacks, we can rewrite this as: @inlineCallbacks def getUsers(self): try: responseBody = yield makeRequest("GET", "/users") except ConnectionError: log.failure("makeRequest failed due to connection error") returnValue([]) returnValue(json.loads(responseBody)) Our exception handling is simplified because we can use Python’s familiar try / except syntax for handling ConnectionErrors. You have been introduced to asynchronous code and have seen how to use Deferreds to: inlineCallbacks These are very basic uses of Deferred. For detailed information about how they work, how to combine multiple Deferreds, and how to write code that mixes synchronous and asynchronous APIs, see the Deferred reference. Alternatively, read about how to write functions that generate Deferreds.
http://twistedmatrix.com/documents/current/core/howto/defer-intro.html
CC-MAIN-2015-18
refinedweb
2,004
56.35
Dimitri Papadopoulos 2007-10-09 Hi, If at all possible, I'm considering patching rpc.rquotad to support IBM's GPFS filesystem: GPFS has its own set of commands such as mmedquota, mmlsquota, and mmrepquota but lacks a port of the rpc.quotad service. I believe it shouldn't be too difficult since: Implemented as the Virtual File System (VFS), GPFS supports the UNIX file interface so that applications need not be changed to take advantage of the new file system. as seen here: Any pointers on where I should start from? Should I implement a quotaio_gpfs.c file? If you implement quotaio_gpfs.c, then quota tools would be able to operate on quota files in gpfs format. To have rpc.rquotad support, you actually need something different. Userspace communicates with kernel about quota using quotactl interface and rpc.rquotad uses this interface (namely GETQUOTA and SETQUOTA calls). So you need to implement hooks for this interface for your filesystem... The sources for quotactl interface in kernel are in linux/fs/quota.c. Dimitri Papadopoulos 2007-11-04 Thank you for your answers. In the meantime, I had a deeper look at this issue. Here is my current understanding of the situation. I would be grateful if you could help by answering a few questions, time permitting: 1) I wrote a proof-of-concept rquotad program for GPFS, based on the BSD source code and directly calling the GPFS API instead of the kernel API. It's available for download from an IBM forum, in this thread: This helped me understand how it all works and already provides us with a drop-in replacement for the Linux rpc.rquotad, supporting only GPFS file systems. It's enough for my current needs, but of course I'd like to give back to the community something that could be integrated into existing code - and hopefully maintained with that code ;-) 2) I may be wrong, but I think GPFS does not make quota information available through the kernel API. It's not just a question of format, I think the information is simply not there. This information is made available through the GPFS user-space API: What would be the easiest way to check that GPFS file-systems indeed do not expose quota information through the usual kernel API? 3) If indeed GPFS does not expose quota information through the usual kernel API, I could ask IBM to fix this situation and make it available. Maybe it's not done because of licensing issues (GPFS is not GPL'ed). I really don't know, hopefully I can get some feedback from IBM support on this one... Let's call this solution A. Otherwise, I could stick to my current code, with two different code paths, one for GPFS and using the GPFS API, another for all other file-systems and using the kernel API. Let's call this solution B. Solution A would be preferable over solution B, wouldn't it? 4) Concerning solution B: From a technical point-of-view, would you accept patches that implement solution B? Again, that would mean two different code paths, one using the GPFS API, the other one using the kernel API. 5) Concerning solution B: From a licensing point-of-view, there's an incompatibility. Linux DiskQuota is GPL'ed as far as I can see. The GPFS library libgpfs.so containing gpfs_quotactl() is provided under a closed-source commercial license. Either Linux DiskQuota or IBM would have to change their license to be able to integrate the code. On the Linux DiskQuota side: Would a GPL exception for GPFS be acceptable? On the IBM side: I could ask IBM to release the gpfs_quotactl() part of libgpfs.so under GPL. They have already done that for Samba in the past, exposing parts of the GPFS API in the GPL'ed library libgpfs_gpl.so and its <gpfs_gpl.h> header: Would you accept patches for GPFS support under solution B, if IBM re-licensed the relevant parts of their code under GPL? If the answer is yes, I'll start pestering IBM about this... Dimitri Papadopoulos 2007-11-05 Here is how I test whether quota information is made available through the kernel API: $ cat test-quotactl.c #include <sys/quota.h> #include <stdio.h> #include <string.h> #include <errno.h> int main() { const char special[] = "/neurospin/home/papadopoulos"; int id = 16044; struct dqblk addr; int cmd = QCMD(Q_GETQUOTA, USRQUOTA); if (quotactl(cmd, special, id, (caddr_t)&addr) < 0) { fprintf(stderr, "quotactl() failed: %s (%d)\n", strerror(errno), errno); } } $ $ cc -o test-quotactl test-quotactl.c $ $ ./test-quotactl quotactl() failed: Block device required (15) $ $ mmlsquota -u 16044 /neurospin/home/papadopoulos mmlsquota: The device name /neurospin/home/papadopoulos starts with a slash, but not /dev/. Block Limits | File Limits Filesystem type KB quota limit in_doubt grace | files quota limit in_doubt grace Remarks home USR 2023216 5242880 6291456 10192 none | 31813 0 0 51 none $ As you can see, it looks like quotactl() cannot access quota information for this path. The GPFS command can. Is there any other way to retrieve quota information from the kernel? Dimitri Papadopoulos 2007-11-05 Ooops... In the test code posted above, please change from: const char special[] = "/neurospin/home/papadopoulos"; to: const char special[] = "/dev/home"; The resulting error is: $ ./test-quotactl quotactl() failed: No such device (19) The device associated to /neurospin/home is /dev/home: $ df -k /neurospin/home/papadopoulos Filesystem 1K-blocks Used Available Use% Mounted on /dev/home 2339553280 191806976 2147746304 9% /neurospin/home $ Hello, First I'd like to thank you for your work and care for the community :). Now to your questions: As far as I can say, GPFS does not provide its quota information over the standard quotactl interface. Part of the reason is that they need to pass to userspace two additional values - the uncertainty in block and inode accounting. Definitely the most preferable solution would be to get together all the cluster filesystem people - i.e. at least also GFS2, OCFS2 and ask what quota information would they like to be passed to user space and then extend current kernel interface so that it is able to support it (and we could accomodate XFS at the same time - currently it has special syscall because of similar reasons). Now for designing the interface, one basically doesn't need to know more than the structure they pass which can hardly be a part of some license issues... So this would be a way to go which would be beneficial to all the involved parties. Given that GPFS is not in mainline kernel and until they change the licence to GPL it cannot be there (BTW: some people find kernel modules with proprietary license illegal due to GPL), I'm reluctant to do any substantial changes to quota-tools to ease them their life with proprietary kernel module... So this basically rules out your plan B (sorry, I hate to make life harder for users because of proprietary modules but this is the only leverage community has against them - besides suing which I don't think would do any good to anybody). The plan A has the disadvantage that it will take some time for this to be done, even more because currently I have more urgent assignments than improving quota kernel interface... I've added this to my todo list I'll see when I can have a look at it :) Honza
http://sourceforge.net/p/linuxquota/discussion/57476/thread/38bb0bfe/
CC-MAIN-2015-40
refinedweb
1,246
62.17
This content has been marked as final. Show 30 replies - 16. Re: force the repaint of a JPanel843853 Mar 25, 2006 6:12 PM (in response to 843853)Nice. You're my hero. That one works, nothing else I found worked. Awesome. Props. You have my gratitude. A+. I'd offer you a job, but I don't own a company. 17. Re: force the repaint of a JPanel843853 Mar 25, 2006 6:13 PM (in response to 843853)Er.... It didn't quote. I was replying to: Call revalidate(); 18. Re: force the repaint of a JPanel843853 May 11, 2006 8:25 PM (in response to 843853)I was having a similar problem with a JPanel where I needed to dynamically add or take away labels. The only way I could get it to repaint was to resize the window or do some action which caused a JScrollPane in another panel to turn on or off by adding and removing text. I tried every suggestion mentioned here, revalidate() did the trick. Thanks. 19. Re: force the repaint of a JPanel843853 May 19, 2007 1:51 PM (in response to 843853)Hello, i'm facing quite the same problem HighlightCell is called from an actionButton click. From there i would like to repaint the cell in XORMode. But calling repaint() does not do the trick neither update revalidate or whatever... public void HighlightCell(boolean highlight) { m_highlight=highlight; //this.updateUI(); Graphics g = getGraphics(); if (g != null) paintComponent(g); else repaint(); update(getGraphics()); revalidate(); } public void paintComponent(Graphics grp) { super.paintComponent(grp); Graphics2D g2d = (Graphics2D) grp; // draw the image using the AffineTransform if (m_highlight) g2d.setXORMode(Color.BLUE); else g2d.setPaintMode(); g2d.drawImage(m_img, m_transformer, this); } But if i click on maximise or iconify the chage operates... What should i do ? Thanks for your help. 20. Re: force the repaint of a JPanel843853 May 21, 2007 12:56 AM (in response to 843853)so start your own thread 21. Re: force the repaint of a JPanel843853 Nov 9, 2007 4:02 AM (in response to 843853)Literally - the solution is to instantiate a new Thread, put the processing that you need to do in the run method, and set it running when you would have otherwise started the processing that you need to do. This is the best solution I have found. This frees up the GUI to resume the repaint in the background while your processing continues. 22. Re: force the repaint of a JPanel843853 Nov 29, 2007 5:11 AM (in response to 843853)I had the same problem, and found an example of how to fix it (by putting the animation code in another string) at, by a professor named Dick Baldwin (His tutorials can be found at, and are great resources) I took a class from Prof. Baldwin, but I have no commercial association with him nor commercial interest in his tutorials (So I hope that this posting does not violate the code of conduct) 23. Re: force the repaint of a JPanel843853 Jul 4, 2008 12:27 AM (in response to 843853)this has solved same problem on my project too! thanx bxpeng. 24. Re: force the repaint of a JPanel843853 Oct 14, 2008 8:30 PM (in response to 843853)This problem actually, from my experience, calls for using 2 Threads that will switch strictly. In the first thread, you call a repaint every time you have data ready for displaing. In the other thread, you do any computations necessary. Also, you do the sleeping in this thread. I think that you can learn best from an example, so I give you some animation code that should work 100% (I tested it before posting). //contents of JavaDraw.java: package javadraw; import java.awt.Color; import java.awt.Dimension; import java.awt.Graphics; import java.awt.Toolkit; import javax.swing.JWindow; public class AnimationWindow extends JWindow{ // this variable is used for strict switching between the threads public static boolean computing = false; // this is how fast the animation will be (one change every 10 ms) public static int delay = 10; // gray is iterated every computation cycle static int gray = 0; public static void main(String[] args) { //some initialization stuff AnimationWindow w = new AnimationWindow(); w.setSize(100,100); centerOnScreen(w); w.setVisible(true); // the threads switch strictly - first the ComputeThread, then AnimationWindow // and again the ComputeThread ... new ComputeThread(w); while(true){ w.draw(); } } // just to have the animation in the center of the screen; // this method is not necessary for the correct functionality of the animation private static void centerOnScreen(JWindow w) { Dimension screenSize = Toolkit.getDefaultToolkit().getScreenSize(); w.setLocation((int)(screenSize.getWidth()-w.getWidth())/2, (int)(screenSize.getHeight()-w.getHeight())/2); } // This method belongs to the AnimationWindow object. // It is synchronized - i.e. no other user thread can call a method on the same // object while this method is being executed. // It must be synchronized in order for the Thread switching to work correctly. synchronized public void compute(){ while(!AnimationWindow.computing){ try { wait(); } catch (InterruptedException ex) { ex.printStackTrace(); } } try { // HERE you set the first thread to sleep, while the other one can draw freely. // Note that if you set the delay to something close to (or) zero, the animation // won't be smooth anymore, because JVM won't have enough time for a redraw. Thread.sleep(AnimationWindow.delay); } catch (InterruptedException ex) { ex.printStackTrace(); } //compute something... (this part is, of course, not necessary) for(int i = 0; i<10000;i++){ for(int j = 0;j<10000;j++) ; //some CPU-intensive computation } //set the next color to draw if(gray+1 >= 256) System.exit(0); gray++; AnimationWindow.computing = false; notifyAll(); } // method whose only purpose is to call repaint() every time new data is computed synchronized public void draw(){ while(AnimationWindow.computing){ try { wait(); } catch (InterruptedException ex) { ex.printStackTrace(); } } repaint(); AnimationWindow.computing = true; notifyAll(); } // the paint method that is scheduled for calling after you call a repaint(); @Override public void paint(Graphics g){ g.setColor(new Color(gray,gray,gray)); g.fillRect(0,0,100,100); } } // The thread that calls any CPU-intensive computations // It also prepares data for drawing. class ComputeThread extends Thread{ AnimationWindow syncObject; public ComputeThread(AnimationWindow w){ syncObject = w; start(); //start() dispatches the ComputeThread and calls its run() method. } @Override public void run(){ while(true){ syncObject.compute(); } } } 25. Re: force the repaint of a JPanel843853 Oct 14, 2008 10:42 PM (in response to 843853)Your approach to animation is slightly flawed. What I advise is a game loop or some sort of program timer that schedules repaints every so many ms. 1) Create a timer object and use the method scheduleAtFixedRate. 2) Create a thread that calls an animation update and the repaint (same as #1 but explicit) Good luck! :) 26. Re: force the repaint of a JPanel843853 Aug 23, 2010 5:30 PM (in response to 843853)Use something like component.paintImmediately(0, 0, component.getWidth(), component.getHeight()) Maybe the JVM has changed since the earlier posts, but repaint() does NOT cause the component to be repainted as soon as possible, contrary to the JavaDoc. It may actually allow a time delay to wait for similar events, in order to improve overall efficiency. For example, if you have a regular event with a repaint() every 500 or 100 ms, the RepaintManager (?) may repaint each time at first, but then it adapts to this regular pattern and waits for all the calls to finish before painting anything. The source of this information is actual experimentation. See also the Note under "Paint Processing" at 27. Re: force the repaint of a JPanel843853 Aug 23, 2010 6:41 PM (in response to 843853)Actually, you have at least a couple problems that I can see in just a quick glance. 1 - paintComponent (is used in SWING) 2 - loading with Toolkit has never been advocated by anyone for anything other than tiny images. If you're image is bigger than a thumbnail, then you're not getting it loaded. Use ImageIO to load it. I have literally, have 1000's of object running full screen in SWNG using repaint() and not have a problem with the graphics refreshing, and lest you think it was on a mega box, it was on anything from a 1.8GHz laptop to a dual quadcore. If you do it right, it will work right. 28. Re: force the repaint of a JPanelmorgalr Dec 16, 2010 9:05 PM (in response to 843853)If you absolutely have to have paints done exactly when you decide, then get rid of your passive approach and use active rendering. 29. Re: force the repaint of a JPanelgimbal2 Dec 28, 2010 9:14 PM (in response to morgalr)edit: nvm, this thread is too old.
https://community.oracle.com/thread/1663771?start=15&tstart=15
CC-MAIN-2017-22
refinedweb
1,455
64.71
On Wed, Mar 31, 2010 at 12:24 PM, David Leimbach <leimy2k at gmail.com> wrote: > > > On Wed, Mar 31, 2010 at 12:02 PM, Gregory Collins <greg at gregorycollins.net > > wrote: > >> David Leimbach <leimy2k at gmail.com> writes: >> >> >...) >> >> See IterGV from the iteratee lib: >> >> >> >> >> The second argument to the "Done" constructor is for the portion of the >> input that you didn't use. If you use the Monad instance, the unused >> input is passed on (transparently) to the next iteratee in the chain. > > >> If you use attoparsec-iteratee >> ( >> >> ), >> you could write "expect" as an attoparsec parser: >> > >> ------------------------------------------------------------------------ >> {-# LANGUAGE OverloadedStrings #-} >> >> import Control.Applicative >> import Control.Monad.Trans (lift) >> import Data.Attoparsec hiding (Done) >> import Data.Attoparsec.Iteratee >> import qualified Data.ByteString as S >> import Data.ByteString (ByteString) >> import Data.Iteratee >> import Data.Iteratee.IO.Fd >> import Data.Iteratee.WrappedByteString >> import Data.Word (Word8) >> import System.IO >> import System.Posix.IO >> >> expect :: (Monad m) => ByteString >> -> IterateeG WrappedByteString Word8 m () >> expect s = parserToIteratee (p >> return ()) >> where >> p = string s <|> (anyWord8 >> p) >> >> >> dialog :: (Monad m) => >> IterateeG WrappedByteString Word8 m a -- ^ output end >> -> IterateeG WrappedByteString Word8 m () >> dialog outIter = do >> expect "login:" >> respond "foo\n" >> expect "password:" >> respond "bar\n" >> return () >> >> where >> respond s = do >> _ <- lift $ enumPure1Chunk (WrapBS s) outIter >>= run >> return () >> >> >> main :: IO () >> main = do >> hSetBuffering stdin NoBuffering >> hSetBuffering stdout NoBuffering >> enumFd stdInput (dialog output) >>= run >> where >> output = IterateeG $ \chunk -> >> case chunk of >> (EOF _) -> return $ Done () chunk >> (Chunk (WrapBS s)) -> S.putStr s >> >> hFlush stdout >> >> return (Cont output Nothing) >> ------------------------------------------------------------------------ >> >> Usage example: >> >> $ awk 'BEGIN { print "login:"; fflush(); system("sleep 2"); \ >> print "password:"; fflush(); }' | runhaskell Expect.hs >> foo >> bar >> >> N.B. for some reason "enumHandle" doesn't work here w.r.t buffering, had >> to go to POSIX i/o to get the proper buffering behaviour. >> >> That's pretty neat actually. I'm going to have to incorporate timeouts > into something like that (and attoparsec-iteratee doesn't install for me for > some reason, I'll try again today). > worked fine today... > > That leads me to another question in another thread I'm about to start. > And that other thread is not going to happen, because I realized I was just having issues with non-strict vs strict evaluation :-) It makes perfect sense now... gist is: timeout (10 ^ 6) $ return $ sum [1..] and timeout (10 ^ 6) $! return $ sum [1..] will not timeout, and will hang while timeout (10 ^ 6) $ return $! sum [1..] does timeout... and everything in the Haskell universe is nice and consistent. Dave > > Dave > > > >> G >> -- >> Gregory Collins <greg at gregorycollins.net> >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL:
http://www.haskell.org/pipermail/haskell-cafe/2010-March/075526.html
CC-MAIN-2014-15
refinedweb
427
60.01
Table of Contents Introduction This section contains information about the most central concept of an EPiServer CMS project, namely that of pages, properties, page types and page templates. It works in the following way: - A page type defines a set of properties. - A page is an instance of the .NET class that defined the page type. - When creating a page the editor assigns values to the properties defined by the page’s page type. - When a page is requested by a visitor, the page template associated with the page’s page type is used to generate output. Page Type During initialization EPiServer CMS will scan all binaries in the bin folder for .NET classes that inherits PageData as example below. For each of the found classes a page type is created and for all public properties on the .NET class a corresponding property on the page type will be created. [ContentType] public class NewsPageModel : PageData { public virtual XhtmlString MainBody { get; set; } } Refer to the Blocks Block Types and Block Templates for a description of blocks which can be used as a reused building block when creating pages. It is also possible to create page types without an corresponding .NET class. In addition to the list of properties added by the developer all page types implicitly contains a set of built-in properties. These built-in properties are always available for all pages regardless of which page type they are instances of. For a full list of built-in properties please see the documentation for the PageData.Property property. Page Template A page template is used to generate output for pages of a give type, often as an .aspx type of file. Page templates in EPiServer CMS can be created using web forms or ASP.NET MVC. A page template can be used for more than one page type but a one-to-one connection is the most common approach. When creating a page template with web forms or MVC, you can use any combination of markup, server controls, code behind logic and more. Below is an example of a template for a page. public class NewsPage : TemplatePage<NewsPageModel> { protected override void OnLoad(System.EventArgs e) { string mainBody = CurrentPage.MainBody.ToHtmlString(); } } A central function of any page template is to access the property values of the page that was requested by the client so that they can be integrated into the output, so how do you accomplish that? This is where the base classes and User Controls come into play. In the following you will find an example based on a web form page template. Within the EPiServer CMS API there is a chain of base classes that your web form can inherit from, but TemplatePage<T> is the most common selection. The corresponding base class for User Controls is UserControlBase<T> and for MVC controllers it is EPiServer.Web.Mvc.PageControllerBase<T>. There are also untyped versions PageBase, TemplatePage, UserControlBase and PageControllerBase available if you do not define your pages in code by .NET classes. By making your own web forms and User Controls inherit from the appropriate base classes within the EPiServer CMS API you gain access to a set of methods and properties that integrate your web form with EPiServer CMS. We will not discuss these methods and properties in detail in this article, but we will highlight one of them; the CurrentPage property. Access to the CurrentPage property is probably the most important benefit you get from inheriting from TemplatePage<T> (or TemplatePage for untyped pages). You will find yourself accessing this property many times when you are writing web forms and User Controls for an EPiServer CMS project. So what is it? The type of the CurrentPage property is an instance of your .NET class that inherits EPiServer.Core.PageData (or PageData for untyped pages). A PageData object is the programmatical representation of a page in EPiServer, it contains the properties defined in your .NET class but also access rights etc.. The value of CurrentPage is automatically set to the PageData object that is reqeusted by the client. This means that you do not have to find out what page you should be fetching property values from yourself, all you need to do is consume the property values from the CurrentPage object. Page The page is where the page type and the page template comes together. The editor creates the page and sets the values for the properties defined by the page’s page type. When the page is requested by a visitor the page template will be used to create the output to be sent to the client. See Also - Blocks Block Types and Block Templates provides a description of blocks, block types and block templates. - Creating Page Templates section provides a description of how to create page templates using web forms or MVC.
https://world.episerver.com/documentation/Items/Developers-Guide/Episerver-CMS/7/Content/Pages-and-Blocks/Pages-Page-Types-and-Page-Templates/
CC-MAIN-2018-22
refinedweb
809
62.58
Hi guys! I hope you can help me with my problem. How can i randomize two questions in bgt? Here is my code: void main() { string input; input=input_box("input_box", "what is bgt?", ""); if(input=="a programming language") { alert("yes!", "The answer was indeed a programming language!"); } else { alert("Sorry!", "The answer is a programming language, not "+input+"!"); } string input1; input1=input_box("input_box", "what does tts mean?", ""); if(input1=="text to speech") { alert("yes!", "The answer was indeed text to speech!"); } else { alert("Sorry!", "The answer is text to speech, not "+input1+"!"); } } Thanks for your help. Best regards. Hi guys! If I understand correctly, you probably want two arrays: one with the questions, and the other with the answers. You'd have answers[0] be the answer to questions[0], answers[1] for questions[1], etc. Then you'd create a random int to decide which question to show. Ex, int index = random(0, questions.length()-1); Then your input_box would take questions[index] where you'd normally put the answer. if(answers[index]==input) alert("Correct!", "You\'re right!"); Hth @CAE_Jones Thank you. Can you explain what I have to do step by step? What is a good programming language to write vocabularies with the random function? Something like this (replace the questions and answers as you wish): void main() { int size=3; // Or however many questions you want. string[] questions(size); string[] answers(size); questions[0] = "Say hello."; answers[0] = "hello"; questions[1] = "What color is the sky?"; answers[1] = "blue"; questions[2] = "How many licks does it take to get to the Tootsy Roll center of a Tootsy Pop?"; answers[2] = "3 if you're an owl"; // now, pick one: int index = random(0, size-1); string text = input_box(questions[index], "Your answer:"); // Just to be safe, let's make sure the answer and input are the same case: text = string_to_lower_case(text); if(text == answers[index]) { alert("Correct!", "You gave the correct response!"); } else { alert("Incorrect!", "That is not the right answer!"); } } @CAE_Jones: Have you tried out the program? The program randomizes nothing at all. When I put this line into my code: int index = random(1, size-1); the program only presents the first question. I wanted the program to randomize questions[0] questions[1] and questions[2]. 6 2017-10-23 20:53:22 (edited by CAE_Jones 2017-10-23 21:08:30) You want random (0, size-1), and size must be greater than 0. I typed the example on my phone, so haven't tested it yet. reading it again, nothing jumps out at me as likely to break it. Have you tried it exactly as written? If it compiles, how many times did you run it without any change? There might be some number typo somewhere, but I still haven't found it. [edit] Yeah, I tested it, and it works perfectly on my computer. It took running it 5 times to get all 3 questions. Remember that for the random function, the first value must be smaller than the second value. Also, arrays are 0-based.[/edit] @CAE_Jones: Thank you. Can bgt only play *.wav files or can it also play *.ogg files? I wanted bgt to randomize all three questions, so when I answer one question, he program proceeds with the second question and so on, until all three questions are answered correctly. Yes, BGT can play .ogg files. If you want to do a non-repeating random sequence, that's a bit more complicated. You could accomplish this in several different ways. For example, you could have a third array, this one of bools, and do something like this: bool[] shown(size); // Don't assume it will initialize to all false: for (uint i=0; i<size; i++) { shown[i]=false; } string text="−"; while (text != "") // allow the user a way to quit early. { int index=random(0, size-1); while (shown[index]) { // check to see if we're done: bool done=true; for (uint i=0; i<size; i++) { if(!shown[i]) done=false; } if (done) index=-1; else index=random(0, size-1); if (done) break; } if (index<0) break; text = input_box(question[index], "Your answer:"); if (text=="" or text=="quit") break; // Ignore case: text=string_too_lower_case(text); if (answers[index]==text) { shown[index] = true; alert("Correct!", "That is the correct response!"); } else { alert("Incorrect!", "That is not correct!"); } } alert ("That\'s all!", "Thanks for playing!"); @CAE_Jones: Thank you for the code. *.ogg files aren't working for me. Only *.wav files work. How is it possible to implement nvda in bgt? When I type: #include "license.txt" #include "nvdaControllerClient32.dll" bgt sends an error. How can I fix this? You don't need to include license.txt, just the nvda controller client dll. I'm not sure what's up with the ogg issue. Can you paste an example that doesn't work? @CAE_Jones: I tried playing *.ogg files with bgt and it worked. Can bgt only play *.ogg files? I tried including nvda and it gave me an compilation error: File: C:\Program Files (x86)\BGT\include\nvdaControllerClient32.dll On line: 1 (3) Line: MZÿÿ¸@غ´ Í!¸LÍ!This program cannot be run in DOS mode. Error: Expected identifier File: C:\Program Files (x86)\BGT\include\nvdaControllerClient32.dll On line: 1 (3) Line: MZÿÿ¸@غ´ Í!¸LÍ!This program cannot be run in DOS mode. Error: Instead found '<unrecognized token>' What am I doing wrong? You don't need to include the dll, just use the screen_readerlesset_library_path funtion (I forget what it's called, but it's in the manual near the screen_reader_speak function). BGT plays .wav and .ogg. You would need external libraries to play other filetypes. @CAE_Jones: When I try playing wav files in bgt, bgt plays no sound at all. When I include a *.ogg file, it plays the sound. BGT does have issues with wav files of certain bitdepths. Specifically, I think it only plays wavs with bitdepths that are powers of 2 (8, 16, 32), but not 24 or 36, etc. Maybe try playing c:\\windows\\media\\ding.wav, since that one always works. That file got kinda quiet post-xp, so if you want something louder, chord.wav is also reliable. If you can't get it to play ding or chord, then there is a very strange problem to be solved. @CAE_Jones: ding.wav and chord.wav are working fine. But other *.wav files aren't working. Is it better to use *.ogg files instead? Hello. To solve this problem, you need to do the bit dept conversion on some audio editor that supports this feature, as in Soundforge itself. convert the bit dept from the file to 16 bits, which is the most used currently and bgt will play them. this is quite common with sound files from professional libraries, in which all files were written to 24 bit wav. Hello. Thank you for your help.
http://forum.audiogames.net/viewtopic.php?id=23325
CC-MAIN-2017-47
refinedweb
1,157
77.84
Investors in CRISPR Therapeutics AG (Symbol: CRSP) saw new options become available today, for the July 31st expiration. At Stock Options Channel, our YieldBoost formula has looked up and down the CRSP options chain for the new July 31st contracts and identified one put and one call contract of particular interest. The put contract at the $60.50 strike price has a current bid of $4.70. If an investor was to sell-to-open that put contract, they are committing to purchase the stock at $60.50, but will also collect the premium, putting the cost basis of the shares at $55.80 (before broker commissions). To an investor already interested in purchasing shares of CRSP, that could represent an attractive alternative to paying $61.21/share today. Because the $60.77% return on the cash commitment, or 56.71% annualized — at Stock Options Channel we call this the YieldBoost. Below is a chart showing the trailing twelve month trading history for CRISPR Therapeutics AG, and highlighting in green where the $60.50 strike is located relative to that history: Turning to the calls side of the option chain, the call contract at the $63.50 strike price has a current bid of $4.00. If an investor was to purchase shares of CRSP stock at the current price level of $61.21/share, and then sell-to-open that call contract as a "covered call," they are committing to sell the stock at $63.50. Considering the call seller will also collect the premium, that would drive a total return (excluding dividends, if any) of 10.28% if the stock gets called away at the July 31 $63.50 strike highlighted in red: Considering the fact that the $63.53% boost of extra return to the investor, or 47.70% annualized, which we refer to as the YieldBoost. The implied volatility in the put contract example above is 89%. Meanwhile, we calculate the actual trailing twelve month volatility (considering the last 252 trading day closing values as well as today's price of $61.21).
https://www.breathinglabs.com/monitoring-feed/genetics/july-31st-options-now-available-for-crispr-therapeutics-crsp/
CC-MAIN-2020-40
refinedweb
347
65.73
On Sat, 03 Jan 2009 14:53:12 -0800, Bryan Olson wrote about testing whether or not an index is in a slice: > I'm leaning towards mmanns' idea that this should be built in. What's the use-case for it? > Handling > all the cases is remarkably tricky. Here's a verbose version, with a > little test: [snip code] Here's a less verbose version which passes your test cases: def inslice(index, slc, len): """Return True if index would be part of slice slc of a sequence of length len, otherwise return False. """ start, stop, stride = slc.indices(len) if stride < 0: return (start >= index > stop) and ((start-index) % -stride == 0) else: return (start <= index < stop) and ((index-start) % stride == 0) (Hint: help(slice) is your friend.) -- Steven
http://mail.python.org/pipermail/python-list/2009-January/518361.html
CC-MAIN-2013-20
refinedweb
130
77.06
Currently, when asked to sum a tensor, the library naively adds the numbers from left-to-right but this approach can lead to loss of precision in certain cases (example shown below). To prevent this loss of precision, I would like to propose the implementation of summation methods like Kahan or pairwise summation. #include <iostream> #include <Eigen/Dense> int main() { using std::cout; Eigen::Matrix2f mat1, mat2; mat1 << 100000, 0.001, -100000, 0.001; cout << "sum1: " << mat1.sum() << '\n'; // expected: 0.002, output: 0 mat2 << 0.001, 0.001, -1, 1; cout << "sum2: " << mat2.sum() << '\n'; // expected: 0.002, output: 0.002 } Created attachment 791 [details] A proposed patch [Work in progress] I tried to resolve the issue by implementing a slightly modified version of the Kahan summation algorithm (). The attached patch basically removes the inbuilt SumReducer and replaces it with the new sum reduce in order to check if it makes a difference. All the unit tests which pass on the latest bitbucket version also pass after this patch but the problem still remains. Maybe I did not make changes at the appropriate place or I am not testing properly. Since this is my first contribution to eigen, any hints and suggestions are appreciated. Your path modifies the Tensor module only (so Eigen::Tensor objects only), whereas your test is checking Eigen::Matrix. @gael Thanks for your reply. It would be very helpful if you could you point me to the files I need to modify in order to change the behaviour globally. @Lakshay: In case you did not find it meanwhile: The sum (and other) reductions for matrices is implemented in Eigen/src/Core/Redux.h. Namely, you need to specialize/re-implement `redux_impl`, or you need to implement an alternative to `scalar_sum_op` (which is implemented in Eigen/src/Core/functors/BinaryFunctors.h). However overall, I doubt that this is really worth the effort, except in special cases (and it is easy to construct cases, where even Kahan summation fails, e.g., some permutation of [1e20, 1, 1e-20, 1e-20, -1, -1e20]). Thanks for the feedback Christopher. You are right in saying the Kahan summation does not work in all the cases but a better alternative is available which solves the problem. Please see for details. Also, since this method is likely to reduce the efficiency of summation, I think it would make sense to implement it as a separate function. Proposed solution in:- reduction-in/diff Created attachment 891 [details] Proof-of-concept tree reduction I attached a proof-of-concept tree reduction. It misses handling of the last elements (currently it only works correctly for sizes which are multiples of 64), and just implements Packet4f-based addition. However, it is not only more accurate (at least for sufficiently large sizes), but also faster than the current implementation (as long as the input fits into the cache). Without properly handling the last elements, this comparison is of course slightly unfair. Examples (Intel(R) Core(TM) i7 CPU 950 @ 3.07GHz): $ ./a.out 128 5000 double: 10.0117, diff redux: 6.55651e-07, diff tree: 6.55651e-07 GFLOPS: redux_best: 6.74629, tree_best: 8.00961 double: 1.69471, diff redux: 3.57628e-07, diff tree: 0 GFLOPS: redux_best: 6.74629, tree_best: 8.00961 double: 3.73089, diff redux: -5.36442e-07, diff tree: 8.9407e-07 GFLOPS: redux_best: 6.74664, tree_best: 8.00961 ... $ ./a.out 1280000 500 double: -66.3165, diff redux: 0.00162375, diff tree: -8.52346e-05 GFLOPS: redux_best: 5.42728, tree_best: 5.59063 double: -350.063, diff redux: -0.0029037, diff tree: 5.65052e-05 GFLOPS: redux_best: 5.43354, tree_best: 5.59152 ... Note that your reduce<> is equivalent to the already implemented redux_vec_unroller. -- GitLab Migration Automatic Message -- This bug has been migrated to gitlab.com's GitLab instance and has been closed from further activity. You can subscribe and participate further through the new bug through this link to our GitLab instance:.
https://eigen.tuxfamily.org/bz/show_bug.cgi?id=1438
CC-MAIN-2020-16
refinedweb
658
61.43
marko29 0 Posted January 1, 2011 Share Posted January 1, 2011 (edited) It is a great thing to handle already awesome winapi dll files like some of tutorials show you already but what about using your own dll files?To spice up your imagination, if you could handle your own dll code and incorporate it to AutoIt, there are simply no limits!I dont see it covered anywhere so i thought to start this with a simple tutorial how to create your basic dll in c/c++ then hopefully some guys will jump in and contributeIf you are new to dll files, this post will show you how to create basic Dll.So i say great, lets test some basic dll file, open up visual studio(anything else should be fine unless you are complete noob) and lets define few things, first thing first // note: i changed the code a bit so pictures might show different code Chose New Project- Chose Win32ConsoleApplication- Enter desired name, click NextPick that you want to Create Dll File- Make sure you select Dll- Make sure you select Empty ProjectFirst We Create header file- Right Click on "Header Files" folder icon and chose "Add New", then lets pick "basicdll" as the nameSimply paste this code into the header file#ifndef _DLL_TUTORIAL_H_ #define _DLL_TUTORIAL_H_ // This is basically to tell compiler that we want to include this code only once(incase duplication occurs), you include it in .cpp file(which we will soon do) #include <iostream> // Inlcude basic c++ standard library output header(this is used for output text on the screen in our code) #define DECLDIR __declspec(dllexport) // We #define __declspec(dllexport) as DECLDIR macro so when compiler sees DECLDIR it will just take it as __declspec(dllexport)(say that DECLDIR is for readability) // Note that there is also __declspec(dllimport) for internal linking with others programs but since we just need to export our functions so we can use them with Autoit, thats all we need! extern "C" // this extern just wraps things up to make sure things work with C and not only C++ { // Here we declare our functions(as we do with normal functions in c header files, our cpp file will define them(later). Remember that DECLDIR is just a shortcut for __declspec(dllexport) DECLDIR int Add( int a, int b ); DECLDIR void Function( void ); } #endif // closing #ifndef _DLL_TUTORIAL_H_- Save it up and move onAnd finally create .cpp file - Right Click on the "Source Files" folder icon, chose add new, pick "basicdll" as the name and follow on ...- Again just paste the .cpp file code...#include <iostream> #include <Windows.h> // this is where MessageBox function resides(Autoit also uses this function) #include "basicdll.h" // We include our header file which contains function declarations and other instructions extern "C" // Again this extern just solves c/c++ compatability [b]Make sure to use it because Autoit wont open dll for usage without it![/b] { DECLDIR int Add( int a, int b ) { return( a + b ); // Finally we define our basic function that just is a sum of 2 ints and hence returns int(c/c++ type) } DECLDIR void Function( void ) { std::cout << "DLL Called! Hello AutoIt!" << std::endl; // This silly function will just output in our Autoit script "Dll Called!" MessageBox(0, TEXT("DLL from Autoit"), TEXT("Simple Dll..."),0); // Use good old Message Box inside Windows Api Functions but this time with your own dll } }And thats it, we are done with configuration of header and source and we can compile our DLL file for later usage(note that you must set configuration properties of your project to use Unicode character set (so TEXT() macro in our .cpp works well)(Autoit)- Right Click on the project/solution or just pick Build from menu and select build project/solution and get to here...Navigate to your project dir(should be in debug folder) and there is your basic dll file with 2 simple functions to use! So lets use them Copy/paste your basicdll.dll file into the dir of your script, now lets use it with autoitLets use this simple function we named "Function", open up autoit script, place dll file inside same folder$dll = DllOpen("basicdll.dll") DllCall($dll, "none", "Function")Results:You should also get a MessageBox but this time you will let AutoIt rest and call it with your own dll to access WinApi by itself.Not everything is always sexy and perfect, so if you just think puting "int" as return type for our Add() function, you are wrong. Our function returns int type indeed but we need to modify it a bit.NOTE - this Add function crushes autoit script, so instead pointing "int" as our return type like this... DllCall($dll, "int", "Add", "int", 3, "int", 5)We need to do this: DllCall($dll, "int:cdecl", "Add", "int", 3, "int", 5)if you can combine the power of language like c/c++ with simplicity of Autoit by creating our own .dll's and not just using the winapi dlls and other prebuilt goodies there is no limit of what you could do with Autoit, hopefully this will be your starting point to show you that c++ is indeed worth to learn!Thats it for now, this tutorial is based on (source that got me into dll files), i hope you will learn from this and everyone can benefit on the end. Happy New Year and Cheers to the forums! Edited January 22, 2011 by marko29 Link to post Share on other sites Recommended Posts You need to be a member in order to leave a comment Sign up for a new account in our community. It's easy!Register a new account Already have an account? Sign in here.Sign In Now
https://www.autoitscript.com/forum/topic/123827-create-basic-dll-file-and-use-it-with-autoit/
CC-MAIN-2022-33
refinedweb
963
60.58
The stat module defines constants and functions for interpreting the results of os.stat(), os.fstat() and os.lstat() (if they exist). For complete details about the stat(), fstat() and lstat() calls, consult the documentation for your system. Changed in version 3.4: The stat module is backed by a C implementation. The stat module defines the following functions to test for specific file types:. Return non-zero if the mode is from a door. New in version 3.4. Return non-zero if the mode is from an event port. New in version 3.4. Return non-zero if the mode is from a whiteout. New in version 3.4.).. Example: import os, sys from stat import * def walktree(top, callback): '''recursively descend the directory tree rooted at top, calling the callback function for each regular file''' for f in os.listdir(top): pathname = os.path.join(top,) An additional utility function is provided to covert a file’s mode in a human readable string: Convert a file’s mode to a string of the form ‘-rwxrwxrwx’. New in version 3.3. All the variables below are simply symbolic indexes into the 10-tuple returned by os.stat(), os.fstat() or os.lstat().). The interpretation of “file size” changes according to the file type. For plain files this is the size of the file in bytes. For FIFOs and sockets under most flavors of Unix (including Linux in particular), the “size” is the number of bytes waiting to be read at the time of the call to os.stat(), os.fstat(), or os.lstat(); this can sometimes be useful, especially for polling one of these special files after a non-blocking open. The meaning of the size field for other character and block devices varies more, depending on the implementation of the underlying system call. The variables below define the flags used in the ST_MODE field. Use of the functions above is more portable than use of the first set of flags: Socket. Symbolic link. Regular file. Block device. Directory. Character device. FIFO. Door. New in version 3.4. Event port. New in version 3.4. Whiteout. New in version 3.4. Note S_IFDOOR, S_IFPORT or S_IFWHT are defined as 0 when the platform does not have support for the file types.. The following flags can be used in the flags argument of os.chflags(): Do not dump the file. The file may not be changed. The file may only be appended to. The directory is opaque when viewed through a union stack. The file may not be renamed or deleted. The file is stored compressed (Mac OS X 10.6+). The file should not be displayed in a GUI (Mac OS X 10.5+). The file may be archived. The file may not be changed. The file may only be appended to. The file may not be renamed or deleted. The file is a snapshot file. See the *BSD or Mac OS systems man page chflags(2) for more information.
http://www.wingware.com/psupport/python-manual/3.4/library/stat.html
CC-MAIN-2014-42
refinedweb
502
77.84
Odoo Help Odoo is the world's easiest all-in-one management software. It includes hundreds of business apps: CRM | e-Commerce | Accounting | Inventory | PoS | Project management | MRP | etc. Unreadable attached filename recieved by mail Hello, when i try to test Supplier vs Client (OpenERP) collaboration by mail i get message with the invoice connected to it. But that attached filename is made unreadable (i.e. Test.pdf transformed to =-UTF-8-b-0KHRh9C10YIgMzc5LnBkZg==-= (1)). Any ideas how can i solve this problem? I found that post on old forum: h**p://forum.openerp.com/forum/topic34484.html It says that:"...When I use the fetchmail module to collect mails of applicants for HR recruitment the attachment filenames sometimes are not decoded. (Like this: =?UTF-8?B?w4FSVsONWlTFsFLFkF9Uw5xLw5ZSRsOaUsOTR8OJUC5kb2N4?= ) It is problem for us because the attachments are unreadable by simple click with the default applications (Adobe Reader, Word...). I think it is repairable in the mail_thread.py (message_append function) module using the following patch: from email.header import decode_header ... for attachment in attachments: fname, fcontent = attachment fname, encoding = decode_header(fname)[0]" I have exactly same problem, but i can't find mail_thread.py file, may be this solution is made for earlier version?? About This Community Odoo Training Center Access to our E-learning platform and experience all Odoo Apps through learning videos, exercises and Quizz.Test it now Sorry no specific advice other than "have you tried updating the 'mail' addons" - solved countless problems myself this way! Cameron, thank you for your answer. I'm novice in OpenERP, can you show me a right way how to do an update of a module? I've installed the last version a week ago... For what its worth - Settings>Modules>Installed Modules Search Mail Select Email Gateway Select Upgrade - Do Base as well! Have you tried saving the attachment and opening it directly? Now i see the origin of problem. All is all right if the filename has latin symbols. If i try to send attachment with cyrillic filename, then i get it unreadable in OpenERP messages dashboard. I can download it and rename, after that it will open. But that's not good for users... So something is going wrong with message attachment filename. Some decoding functions not works as they can i think. Boris, did You solve this problem? I can't find answer too.
https://www.odoo.com/forum/help-1/question/unreadable-attached-filename-recieved-by-mail-31612
CC-MAIN-2018-39
refinedweb
396
59.6
If you remember, I finished my tour of Mac OS options for finding all processes using a particular tty with the following: The only bummer is that Linux's sysctl.h doesn't include KERN_PROC_TTY, so I guess we'll have to grub around in /proc or call lsof(1) there.The easier option for Linux, it seemed, was to use lsof(1). It was pretty slow at around 240ms on my Opteron compared to Mac OS' 7ms on my dual G5, but that seemed just about bearable. 300ms seems to be about the point where users at my level of impatience notice a delay. What I didn't think about, though, is that most machines aren't very lightly loaded Opterons. A Pentium 4 with a lot more processes was regularly taking about 0.5s, which was noticeable. And then the complaints started coming in of times over a second on another Pentium 4. lsof(1) doesn't scale well. I knocked up a little Ruby script proof of concept to see how well grubbing around in /proc would work: #!/usr/bin/ruby -w if ARGV.length() != 1 $stderr.puts("usage: lsof.rb <absolute-filename>") exit(1) end filename = ARGV[0] def has_file_open(pid, filename) Dir["/proc/#{pid}/fd/*"].each() { |fd_file| begin linked_to_file = File.readlink(fd_file) if filename == linked_to_file return true end rescue # Ignore errors. end } return false end pids = [] Dir.chdir("/proc") Dir["[0-9]*"].each() { |pid| if File.stat("/proc/#{pid}/fd").readable?() if has_file_open(pid, filename) pids << pid end end } names = [] pids.sort().uniq().each() { |pid| # Extract the "(name) " field from /proc/<pid>/stat. name = IO.readlines("/proc/#{pid}/stat", " ")[1] # Rewrite it as "name(pid)". names << name.sub(/^\((.*)\) $/) { |s| "#$1(#{pid})" } } puts(names.join(", ")) exit(0) This was significantly faster, and had the advantage of working on Cygwin, which doesn't ship with lsof(1) but does have a sufficiently compatible /proc. Even on Cygwin it was only taking about 70ms. On Linux (on the Pentium 4) it was down around 40ms. The killer though, that gave me sufficient impetus to actually make the change, is that lsof(1) can hang if your Linux box has a hung mount. Bad enough that it was taking over a second (on the event dispatch thread!), but that it could sometimes just go away and never come back... lsof(1) doesn't play nice with network file systems. A quick rewrite of my Ruby in C++ later, and there's no danger of this part of Terminator hanging over a hung mount. We get our result in exactly the form we want in 20ms (on the Pentium 4 Linux machine). The Opteron is down to around 10ms: the same as the dual G5's Mac OS sysctl(3). Why did I rewrite the script in C++ rather than just call out? No particularly good reason. I didn't really want to start a new process when the user's probably trying to kill something for the same reason that shells tend to have kill(1) built in. But really it came about because I initially thought "there's no reason not to do this in Java", and then realized that, actually, file system access is one of Java's worst foibles. Maybe I'm overly sensitive about that, given how much of my life is taken up with file system performance, but Java also has huge functionality gaps when it comes to the file system. Don't even talk to me about symbolic links! The C++ was easy, less verbose than the equivalent Java, and roughly as verbose as the equivalent Ruby. A good POSIX C++ binding would have made things even better. Anyway, the users are quiet again, so I can go back in my box.
http://elliotth.blogspot.com/2006/01/how-does-terminator-know-what.html
CC-MAIN-2017-04
refinedweb
629
74.19
say ExternalMultiply(2, 3, 4, 5, 6) 29 December 2015 Regina Rexx is a very complete and well-documented implementation of the Rexx programming language. Unfortunately its documentation is very much reference documentation, a problem that makes using rarer facilities more difficult. One such facility is the whole foreign language interface. Regina follows IBM’s SAA API, but this is an API that is not particularly popular outside of IBM’s notoriously inward-looking world. It is definitively not an API that is well-endowed with end-user documentation, especially in its Rexx integration side of things. There are many examples of Rexx libraries which expose facilities to Rexx users that can be looked at, but unfortunately almost all of them rely on compatibility shims that make it very hard to piece together what’s actually going on when you’re learning. Further, the compatibility shims are a layer of unnecessary obfuscation if you, like me, only ever really intend to use Regina Rexx in your code. Because of these deficiencies in accessible documentation I am embarking on a small series of blog entries to ease entry into the Rexx extension field. Today’s outing will introduce simply adding a single external function to Rexx’s capabilities. For this baby step into extending Rexx we will be exposing a single, simple function: ExternalMultiply(). ExternalMultiply() will accept any number of integer numbers (only in the range of what will fit into a C long) and multiply them together, returning the product (where the product will fit into a C long long). For example the following line will print the number '720': say ExternalMultiply(2, 3, 4, 5, 6) Complete source for the example, a test driver, and a build script are provided at the end of this blog entry. There are some concepts you’ll have to get used to first before extending Rexx. Everything in Rexx is a string. Behind the scenes things may not be so simple-mindedly implemented, but at any public level of visibility Rexx values are all strings. This implies that all of your functions will have a set of strings as parameters. Rexx is not based on the assumption that C is the lingua franca of computing. Many languages have assumptions that are thinly veiled C assumptions. Numbers are based on C numberical types (usually long for integers and double for floating point, for example). Strings are NUL-terminated arrays of byte-sized characters. Rexx is not based on this assumption since it, as a language, predates the C-as-lingua-franca era. As a result you will be needing to understand Rexx’s exposed data types and you will spend a lot of time converting back and forth between them and C’s. There is a lot of boilerplate in making FFI for most languages. Rexx is no exception. Here are some of the things you’ll have to do in most function-exposing modules. rexxsaa.h All of the Regina API is specified in a single included header: rexxsaa.h. Unusually for such a system, it is not enough to merely include the header. You have to activate specific subsystems before including it. This is done by using #define of relevant symbols before inclusion: #define INCL_RXFUNC #include <rexxsaa.h>#include <rexxsaa.h> When you include rexxsaa.h with the relevant symbols defined you are given access to the prototypes, data types, and symbols for the subsystem interface you wish to use. In our case we are given access to the prototypes, data types, and symbols for the external function interface API. Of course you’ll have to declare the function that’s being exported. The plain version of this looks something like: APIRET APIENTRY ExternalMultiply(PCSZ, ULONG, PRXSTRING, PCSZ, PRXSTRING); That’s quite a mouthful, however, and would get tedious to type for each and every exported function. Thankfully the Regina API gives you a nice typedef for it: RexxFunctionHandler ExternalMultiply; If you must you can go ahead and use the repetitive, long-winded version but I really strongly recommend using RexxFunctionHandler instead. Of course the types involved have to be known. APIENTRY is something you place there "just because". (It has to do with linkage types on OS/2 and Windows. Putting it in the signature makes sure that your code will work on OS/2 and Windows environments.) APIRET is an alias for ULONG. ULONG is an unsigned long alias. PCSZ is a typedef for a pointer to a C-style ( NUL-terminated) string. PRXSTRING is a pointer to an RXSTRING. RXSTRING is the kicker. That’s the representation Regina exposes for its internal string values. It is defined as: typedef struct { unsigned char *strptr; unsigned long strlength;unsigned long strlength; } RXSTRING;} RXSTRING; In addition to the above, there are also two values which need defining (for readability): #define RX_OK 0 #define RX_ERROR 1 The Regina APIs all want a return value of 0 for "worked fine" and a return value that is non-zero for "failed somehow". Note that this is not the return value that the function itself returns to the script! This is the return value the function returns to the interpreter to tell it whether the function call was a success or not. When you return non-zero, Rexx’s conditions mechanisms leap into action, signalling or calling error handlers as appropriate. Of course in any non-trivial function package you’ll have to declare helper functions and other such things. This is a trivial function package, but for show here are two helpers: static long rexx_to_long(RXSTRING); static void long_long_to_rexx(long long, PRXSTRING);static void long_long_to_rexx(long long, PRXSTRING); Now that we’ve declared everything of interest, we need to implement the functionality. Let’s start by looking at the);} static void long_long_to_rexx(long long val, PRXSTRING rexxval) { sprintf(RXSTRPTR(*rexxval), "%lld", val); rexxval->strlength = strlen(RXSTRPTR(*rexxval)); }rexxval->strlength = strlen(RXSTRPTR(*rexxval)); } The first thing that the observant reader will spot is that this code is not very safe! This is because it is trying to illustrate the concepts of the API without burying them underneath a pile of security boilerplate. Serious code would make use of proper techniques including properly framing the strtol() call with an end pointer, checking for errors, and generally not being a one-liner. (This is why I recommended building up a conversion library earlier; there’s a lot of potential boilerplate overhead that you’re not going to want to type repeatedly.) That being said, Regina in particular will pass, for numbers, C-style strings in the strptr member, so use of C-style string manipulation functions is fine for demonstration purposes. The second flaw is a bit more reasonable. In the context of Regina this is not a flaw at all. Regina allocates a 256-character string for return values. This is documented and fixed. What is at issue is if all interpreters do this. If portability across interpreters is not your concern, then using the presupplied buffer is fine. Of course it goes without saying (but I will say it anyway) that you’ll need to include the appropriate C library headers for any of this to work: #include <stdio.h> #include <stdlib.h> #include <string.h>; } This is the function that Rexx scripts will actually directly call. It is actually quite straightforward, but has some unexpected issues. Let’s look at the parameters passed in one at a time: PCSZ name This is a C-style string containing the name of the function being called. Why is this needed? Because it’s possible to have a single registered function entry point that implements several related functions. (This is not something I’d personally recommend, but it’s something you can do!) We ignore this in our code. ULONG argc Your function will have argc `RXSTRING`s as arguments. PRXSTRING argv An array of argc `RXSTRING`s. Your arguments, in short. PCSZ queuename A C-style string containing the name of the current data queue. This is out of scope for this tutorial (and out of scope for most code!). PRXSTRING returnstring This points to a single RXSTRING which is used to return a value to the caller. The special variable RESULT will be set to this value on return. By default Regina supplies a ready-made 256-byte buffer in this pointer that you can use to set up your return value. This may not be portable. The rest of the code is straightforward. product is initialized to 1. Each passed-in argument is converted into a long and multiplied in place with product. When the arguments have all been processed, product is converted into the RXSTRING pointed at by returnstring. RX_OK (0) is returned then to signal that everything is hunky dory. On the Rexx side, the interpreter must first be informed of the existence of the exported function and its location. This is done in this chunk of code: if RxFuncAdd('ExternalMultiply', 'external', 'ExternalMultiply') <> 0 then do say RxFuncErrMsg() exit E_ERROR end The key code is the call to RxFuncAdd(). It maps the first argument (internal name of the function) to the third argument (external name of the function) in the library named by the second argument. In this case we’re calling the function ExternalMultiply in Rexx, although it will be callable as externalMultiply or even ExTeRnAlMuLtIpLy—Rexx is case insensitive). The external name is ExternalMultiply, the name you exported the function by. The library name is 'external' which, in a Linux environment will have 'lib' prepended and '.so' appended, so in this case it will be looking for 'libexternal.so'. Again this will change by enviroment. After all of this, calling the function is an anticlimax. ExternalMultiply is now used just like any Rexx BIF: say ExternalMultiply(2, 3, 4, 5, 6) product = 1 do i = 100 to 1000 by 145 product = ExternalMultiply(product, i) end say product Of course there will be some issues relating to C type limitations. Rexx has arbitrary-precision arithmetic that doesn’t wrap. Most C implementations will have 64-bit long long values that will wrap when overflowed. This particular code will, as a result, not be seamless. Of course this function is trivial, not particularly well-matched to Rexx, and not very safe. Using it will not give the programmer the feeling that they’re using something intended for Rexx. Here are some improvements that could be made. Use secure code. The endptr argument to strtol() should be used instead of assuming that the number passed by Regina will be NUL-terminated. Allocate a local buffer for returnstring and use that instead of the Regina-provided one. (Don’t worry: you won’t leak. If you change the returnstring->strptr member, Regina will deallocate it for you when finished using it.) Rexx numbers are arbitrary precision decimal representations. (Indeed they are the inspiration and much of the design behind the more recent IEEE 754 decimal format!) They are not like C’s float or double types and they are not like C’s integer forms, unsigned or otherwise. Using something like this decimal representation package instead of C’s native types is probably smart idea. (Actually ISO/IEC TS 18661-2 enhances ISO C with decimal floating point support. Catch up!) The implementation of ExternalMultiply() doesn’t check any of the input for validity. Nor do its helper functions. There’s no check for 0 values, so no short-circuit return of 0 at need. There’s no check that the answer will overflow the returnstring buffer (although with a long long that is not a meaningful risk). Proper code will check all of this. Do that when writing real code. The following blocks contain the full source code for the external function implementation, the test driver, as well as a simple build script usable in a Linux environment. They should serve as a good basis for making a proper, useful Rexx extension library. /* external.c */ #include <stdio.h> #include <stdlib.h> #include <string.h> #define INCL_RXFUNC #include <rexxsaa.h> /* helper function declarations */ static long rexx_to_long(RXSTRING); static void long_long_to_rexx(long long, PRXSTRING); /* external API function declarations */ RexxFunctionHandler ExternalMultiply; /* symbolic return values */ #define RX_OK 0 #define RX_ERROR 1 /* external API functions */); rexxval->strlength = strlen(RXSTRPTR(*rexxval)); } /* test-external.rx */ E_OK = 0 E_SYNTAX = 1 E_ERROR = 2 E_FAILURE = 3 E_HALT = 4 E_NOTREADY = 5 E_NOVALUE = 6 E_LOSTDIGITS = 7 E_UNKNOWN = 255 signal on syntax name error signal on error name error signal on failure name error signal on halt name error signal on notready name error signal on novalue name error signal on lostdigits name error if RxFuncAdd('ExternalMultiply', 'external', 'ExternalMultiply') <> 0 then do say RxFuncErrMsg() exit E_ERROR end say 'ExternalMultiply(2, 3, 4, 5, 6) returned' ExternalMultiply(2, 3, 4, 5, 6) exit E_OK error: type = condition('C') if condition('I') = 'SIGNAL' then say 'Error' type || '(' || rc || ') signalled on line' sigl || '.' else say 'Error' type || '(' || rc || ') called on line' sigl || '.' say 'Description:' condition('D') select when type = 'SYNTAX' then code = E_SYNTAX when type = 'ERROR' then code = E_ERROR when type = 'FAILURE' then code = E_FAILURE when type = 'HALT' then code = E_HALT when type = 'NOTREADY' then code = E_NOTREADY when type = 'NOVALUE' then code = E_NOVALUE when type = 'LOSTDIGITS' then code = E_LOSTDIGITS otherwise code = E_UNKNOWN end exit code /* build-external.rx */ 'gcc -shared -fpic -o libexternal.so external.c'
http://chiselapp.com/user/ttmrichter/repository/gng/doc/trunk/output/blog/2015/02-rexx-external.html
CC-MAIN-2017-04
refinedweb
2,226
54.93
We are excited to release new functionality to enable a 1-click import from Google Code onto the Allura platform on SourceForge. You can import tickets, wikis, source, releases, and more with a few simple steps. Hi Agnes,=20 Your problem is the same as the previous couple of posts. In the example the pre-saxon.7 namespace is used: however because you are using saxon7.x you will need to use the new namespace: Its worth having a read of=20 as many of the extension functions you are trying to use are acutally implemented in xslt2.0/xpath2.0 and therefore dont exist anymore. Your best bet if you are following through the book is to use saxon6.5.2 cheers andrew > -----Original Message----- > From: Kielen, Agnes [mailto:Agnes.Kielen@...] > Sent: 29 November 2002 09:29 > To: Saxon Help (E-mail) > Subject: [saxon] saxon:evaluate failure >=20 >=20 > Hello, >=20 > Im trying the saxon:evaluate example in Michael's book at=20 > page page 787. > When I run the calls.xsl stylesheet I get the following error: >=20 > Error at xsl:variable on line 33 of file:/C:/saxon/calls.xsl: > No implementation of function saxon:evaluate is available > Transformation failed: Run-time errors were reported >=20 > I'm using saxon 7.1 at windows2000. What do I wrong? >=20 > Agnes >=20 >=20 >=20 >=20 > ------------------------------------------------------- > This SF.net email is sponsored by: Get the new Palm Tungsten T=20 > handheld. Power & Color in a compact size!=20 > > _______________________________________________ > saxon-help mailing list > saxon-help@... > >=20 >=20 >=20 Mike said: I'm using saxon 7.1 at windows2000. What do I wrong? > > > As Andrew pointed out, the Saxon namespace changed in version 7.x > > I did this for a couple of reasons. Firstly, as ICL no longer had any > active involvement in Saxon development, it seemed appropriate to lose > the icl.com reference. Secondly, 7.x replaced many of the Saxon > extensions with standard XSLT facilities, so I reckoned a change in > namespace would encourage people to look at the extensions they were > using and migrate to standard features where possible. I hope this isn't going to cause you to do with 6.5 to 7 what you've been doing on the XSLT list since 1999? No, that's not XSLT, that's Microsofts 1998 implentation, the namespace has changed, could become, No, that's not XSLT 2.0, That's my XSLT 1.0, the namespace has:
http://sourceforge.net/p/saxon/mailman/saxon-help/thread/63C4AD0365821F4291ACF76C0672FA3506EFA6@piper6.piper-group.int/
CC-MAIN-2014-10
refinedweb
411
68.06
Thread pool in Java In Java, a thread pool is a group of threads that we can reuse for various tasks in a multithreading environment. The thread pool may be of a fixed size where each thread performs a task and once it completes it again goes back to the pool. In this way, each thread can execute more than 1 task. This reduces the overhead of creating multiple threads for different tasks. We can also restrict the number of active threads by creating a fixed-size thread pool. Java contains an in-built thread pool called ThreadPoolExecutor which we will discuss in detail in this article. Advantages of a Thread pool - Better performance - Less resource required - Reduced overhead - More responsive Disadvantages of a Thread pool - Chances of deadlock - Thread leakage - Resource thrashing ThreadPoolExecutor Java has an in-built thread pool class named ThreadPoolExecutor that implements the Executor interface and ExecutorService subinterface. To use this, we need to implement the Runnable object and pass it to the Executor which will execute the task. The ExecutorService object contains the set of tasks that we want to process. The Java thread pool initializes the size of the pool and the threads sequentially execute the tasks as and when it is free. In the above illustration, we have a ThreadBolockingQueue with 5 tasks waiting in line for execution. The thread pool has a fixed size of 3 threads. Each thread executes 1 task at a time and when a thread becomes free, it sequentially takes the next available task for execution. You can see in the below diagram that Thread 1,2 and 3 execute Task 1,2 and 3. Meanwhile, Task 4 and 5 waits in the queue until any of the threads complete the execution. When any thread is idle, it picks up the next task from the queue and completes the execution. Methods of Executor Below are the main methods of an Executor class that is part of the java.util.concurrent.Executors interface. Methods of the ThreadPoolExecutor Below are the most commonly used methods of the ThreadPoolExecutor class. Example: FixedThreadPool Below is a simple example of using the newFixedThreadPool method of the ThreadPoolExecutor class. The SampleTask class performs the task where it prints the task name and some random value. The FixedThreadPoolDemo class is the main class where we execute all the tasks using a fixed size thread pool in Java. We create a thread pool using the newFixedThreadPool method that initializes the size to 3. We then create 5 tasks where these 3 threads execute all the 5 tasks as and when the thread is available. import java.util.concurrent.Executors; import java.util.concurrent.ThreadPoolExecutor; class SampleTask implements Runnable { private String name; public SampleTask(String name) { this.name = name; } public String getTaskName() { return name; } //Task public void run() { try { System.out.println("Executing Task: " + name); int rand = (int)(Math.random()*15); System.out.println("Random value for " + name + ": " + rand); } catch(Exception e) { e.printStackTrace(); } } } public class FixedThreadPoolDemo { public static void main(String[] args) { //Create a thread pool with 3 threads ThreadPoolExecutor ex = (ThreadPoolExecutor)Executors.newFixedThreadPool(3); //Execute 5 tasks using 3 threads for(int i=1;i<=5;i++) { SampleTask t = new SampleTask("Task " + i); System.out.println("Task started: "+ t.getTaskName()); ex.execute(t); } ex.shutdown(); } } Task started: Task 1 Task started: Task 2 Task started: Task 3 Executing Task: Task 1 Executing Task: Task 2 Task started: Task 4 Executing Task: Task 3 Task started: Task 5 Random value for Task 1: 6 Random value for Task 2: 13 Executing Task: Task 4 Random value for Task 3: 11 Random value for Task 4: 1 Executing Task: Task 5 Random value for Task 5: 7 Example: ScheduledThreadPoolExecutor Whenever we want to execute the same task repeatedly after a fixed delay, we can use the newScheduledThreadPoolExecutor method of the ThreadPoolExecutor class. In the below example, we create a scheduled Java thread pool of size 3 and repeatedly execute the same task that we define in the TaskDemo class every 3 seconds. The TaskDemo class prints the current execution time so that we can see the delay in execution. import java.util.Date; import java.util.concurrent.Executors; import java.util.concurrent.ScheduledThreadPoolExecutor; import java.util.concurrent.TimeUnit; public class ScheduledThreadPoolDemo { public static void main(String[] args) { ScheduledThreadPoolExecutor se = (ScheduledThreadPoolExecutor)Executors.newScheduledThreadPool(3); TaskDemo t = new TaskDemo("task"); System.out.println("Task created"); se.scheduleWithFixedDelay(t, 3, 3, TimeUnit.SECONDS); } } class TaskDemo implements Runnable { private String name; public TaskDemo(String name) { this.name = name; } public String getTaskName() { return name; } public void run() { System.out.println("Executing " + name + " at current time: " + new Date()); } } Task created Executing task at current time: Fri Feb 12 06:47:58 IST 2021 Executing task at current time: Fri Feb 12 06:48:02 IST 2021 Executing task at current time: Fri Feb 12 06:48:05 IST 2021 Executing task at current time: Fri Feb 12 06:48:08 IST 2021 Executing task at current time: Fri Feb 12 06:48:11 IST 2021
https://www.tutorialcup.com/java/thread-pool-in-java.htm
CC-MAIN-2021-49
refinedweb
842
53.92
suppose I have the pd.Series import pandas as pd import numpy as np s = pd.Series(np.arange(10), list('abcdefghij')) a 0 f 5 b 1 g 6 c 2 h 7 d 3 i 8 e 4 j 9 dtype: int32 One way would be to reshape and then use ravel(order='F') to read the items off in Fortran order: In [12]: pd.Series(s.values.reshape(2,-1).ravel(order='F'), s.index) Out[12]: a 0 b 5 c 1 d 6 e 2 f 7 g 3 h 8 i 4 j 9 dtype: int64 Fortran order makes the left-most axis increment fastest. So in a 2D array the items go down the rows of one column before progressing to the next column. This has the effect of interleaving items, compared to the usual C-order. Alternatively, you could reshape in Fortran order and ravel in C order: In [17]: s.values.reshape(-1, 2, order='F').ravel() Out[17]: array([0, 5, 1, 6, 2, 7, 3, 8, 4, 9])
https://codedump.io/share/b25WDlA7wGBc/1/how-do-i-card-shuffle-a-pandas-series-quickly
CC-MAIN-2017-04
refinedweb
179
71.34
I use : - Code: Select all import sys sys.path.append("c:\users\me\documents\pythondev") or - Code: Select all import sys sys.path.append("") (as I have my module in my current working directory) But when I import it via : - Code: Select all import myMod it just seems to run then entire code.( I also sometimes get auto completion listing the functions within idle with myMod. but this seems intermittent.) Another thing I tried was just importing the functions with - Code: Select all from myMod import func1 Basically what I want to be able to do is have access to functions in another program by importing it. Is this possible? Does this only work with code written in a format that changes them from a program to an actual module. So it would not work to just import a previously written program that happens to have functions in it? appreciate for any help with this Regards
http://www.python-forum.org/viewtopic.php?p=13153
CC-MAIN-2015-35
refinedweb
156
65.01
Chapter 4: Glass Breaking Chapter 4 begins laying out the case against Java: ." In this chapter, Tate looks at what he considers Java's worst problems today: over-complexity and its many sources: * Frameworks that make life easier for experienced developers, at the expense of approachability for new developers * Over-reliance on XML, caused by Java's inability to represent structured data * "Compromises", like primitives, which were introduced to lure the C++ crowd * A long compile/deploy cycle * The costs of strong, static typing * Over-reliance on tools to mitigate complexity ." > The point I guess is that Bruce's original assertion that > static typing sucks, is just as much chest beating as my > assertion that it doesn't. I'm not sure that I agree with Bruce that static typing sucks, but the primary advantage that static typing provides that dynamic typing doesn't is performance. However, the performance difference has grown smaller (the difference between VisualWorks Smalltalk and Sun Java is minor) at the same time as computers have become faster, making it unimportant for many applications. The performance reason still influences today when it's no longer such a compelling logical argument, but the more difficult issues a new language must address are that programmers are conservative about language choice and managers are strong followers of large corporate marketing campaigns (hence the popularity of C#.) The main advantage of dynamic typing is less complexity and increased speed of development, so the question is the old one of programmer time versus machine time. Many programmers sacrificed machine time for programmer time in moving from C++ to Java or C#, so the choice has be made in favor of improved programmer productivity in the past and likely will be again in the future.. It's also worth pointing out that two of the languages above are staticly typed but not explicitly typed. Both Haskell and OCaml use type inferencing to determine types, allowing generics to be expressed with none of the effort required in C++ or Java. While I don't expect a major migration to OCaml or Haskell, type inferencing will become widely discussed as it has been announced as a feature of C# 3.0. > [...] but the more difficult issues a new language must > address are that programmers are conservative about > language choice and managers are strong followers of > large corporate marketing campaigns (hence the > popularity of C#.) Note that there have been transitions in the past despite those factors, eg my vaguely historical timeline where I talk about COBOL being replaced by VB which is then replaced by Java. > [...] The main advantage of dynamic typing is less > complexity And yet we continually see offhand comments about how for instance Lisp is too hard for the average developer. An interesting contradiction. That it is 'less complex' and at the same time 'too hard'? People talking about the dynamic languages say that they allow a higher level of abstraction. Okay, fine. Java also allows some pretty high levels of abstraction through OO. That then leads into a discussion of 'domain specific languages'. But of course every time you define a new object class it is like adding a noun to the language, interfaces are like adjectives, and methods are like verbs. The problem with that analogy is that if I want to have a kick verb I can only kick things which I then add that method to, whereas in natural languges whenever a new verb is made up you can apply it to pretty much anything immediately. NB: Objective-C which is otherwise quite similar to Java takes the approach that method calls are like messages to the object (derived from Smalltalk I think) - and that you can send any message to any object. In that respect I suppose Objective-C is less strongly typed than Java. While it makes certain kinds of things easier (ie any object can be a proxy for any other object) it does not seem like that big a deal. Eg if you want it you can go out and find a c syntax language with that feature. I think that is where the debate would veer off into a discussion about functional languages vs procedural. Because I think that dynamic typing is just a smaller part of a larger argument. > [...]. You do a good job of laying out the case that the next big language will be one which continues the trend of sacrificing a little bit of runtime speed for a lot of developer productivity. But - it is an old saw that going with Java forces you to sacrifice execution speed. I'm not convinced that that is necessarily the case. After all, with hotspot and runtime profiling I understand that in some cases optimizations can be made at runtime which could not be made at compile time, which means that for some things Java actually runs faster than native code. For my own projects, I've always found Java to be blazingly fast. I admit that when I've worked with some other people's code that the performance has been less stellar - but in those cases it seems to me that the codebase has been 4-10x the size it actually needed to be - and I've seen the same thing said about Lisp, that it is easy for an unsophisticated approach to be hideously inefficient. So I'm not convinced that the languages you list are faster than Java. I think well written Java is 'fast enough'. But... at least two of those languages were around at the same time that not one, but two major language transitions occurred in the marketplace (when COBOL handed over the crown to VB, and then Java took it off VB) Why did the languages you list not seize the day on those occassions? The performance of VB for instance (being an interpreted language) was terrible... if those languages were: (a) less complex than Java (b) faster than Java Why weren't they the ones to push VB off the throne? It seems to me that the people going on about these other languages are like kids bragging about how their Dad can beat up your Dad - whilst their Dad is lying in hospital on a life support machine after having been beaten within an inch of his life. So why are dynamic/functional languages so unpopular? Is it because of conservatism? Hardly, as there have been numerous upheavals in the computer industry. In fact, it must be that they simply fail to offer any compelling advantages. The people who learn them at university simply don't go on to use them in the real world. Perhaps they are initially 'forced' to use other languages (like VB), but then they find that those languages are perfectly adequate, and over time the zeal for functional languages dies down (or withers on the vine or what have you). By analogy, I had a friend I introduced to Java who was a big fan of Pascal. Initially he complained about a lack of enums. Later on I found out he was using arrays for everything because he didn't trust other people's libraries for things like ArrayList and Vector (due largely to bad experiences with Microsofts IIS libraries). I convinced him to give the sun ones a chance. A couple of years later I caught up with him again and asked him about enums. He said that he didn't miss them any more, somewhat surprised I asked him why not - and he said that if you program properly in 'the Java way' you don't need them. > And yet we continually see offhand comments about how > for instance Lisp is too hard for the average developer. > > An interesting contradiction. That it is 'less complex' > and at the same time 'too hard'? Lisp is much easier to learn than Java. The syntax is so simple you can pick it up in an hour. Languages like Lisp (Scheme) work wonderfully to teach children how to program. The C-family of languages, including Java, are much harder for people to learn. The contradiction arises in the adult programmer who's familiar with one of more C like languages. For that adult, learning Java is easy because there's such a great similarity between the two languages. However, learning your first functional language, whether it's Lisp, OCaml, or Haskell, is going to be quite hard because it's a fundamentally different paradigm of programming. Most imperative programmers have a difficult time imagining programming without lists of statements that execute in order, variables, and loops, but that's what they're giving up in using a functional language. I need to leave now, so I'll have to answer the rest of your message later. But if these languages are 'so much better than Java' why aren't there more open source projects built around them? Other than Perl, Python and PHP all the other scripting languages put together amount to diddly squat. So I think there must be some factor (or factors) inhibiting the uptake of those languages.. So for an individual they might be highly productive (the porridge is just right), whereas for a group, unless they are all on the same wavelength, it will be much harder (the porridge will be too hot, cold, salty or sweet). Which would make them really good for tools, scripting style tasks and prototyping... but when you were going to share it around you'd pretty much have to choose another language. This could explain the Yahoo store phenomenon: originally written in Lisp in record time (allegedly) and then later on rewritten in Java because noone could maintain the Lisp (allegedly). > But if these languages are 'so much better than Java' > why aren't there more open source projects built > around them?. There may be a sweet point on the abstraction continuum, but it seems unlikely to me that a language as low level as Java is it. I think what we're seeing today is like the type of resistance that was presented by the goto users when structured programming arose, though it was nice to see Java 1.5 add a modern control structure (the foreach loop), though the syntax is not as clean as I wish it was. Language features have always evolved slowly. It took about a quarter century from Simula to C++ growing popular for object-oriented programming to become a part of most programmer's mindset. Java contains no feature that's newer than 1974. Computer languages change slowly because languages aren't just a tool; to some extent, they're what programmers think in. A computer language needs several things to become popular: 1. It needs to be the programming language of some software system that many people need to use, like C was the language of UNIX or Java with its focus on the web. Lisp had emacs, but emacs isn't sufficiently popular; Python and Zope have the same problem. Ruby may have this with Rails, but it's too early to tell. 2. It needs a free, high quality implementation. This is one point many of the dynamic languages lack to some degree, hence all the work on implementations like Jython and Iron Python and JRuby and Rite. 3. It needs marketing. A good book, and either corporate, university, or open source "hype." Java and C# had their corporate sponsors, but C and C++ became big through word of mouth without major support from AT&T. 4. It can't be too different from what people already know. This is why Ruby is more likely to be the next language than Smalltalk or Lisp, despite the fact that both of those languages are faster and more powerful than Ruby. >. Right, and Perl, Python and Ruby are so "brand new 1.0 versions". One thing that I think is wrong about this line of thinking is that dynamic languages and static ones are not excludent. They have existed for a long time, and both have their spaces, Java will never replace Perl, and Perl will never replace Java. Some people are not resisting to the idea NOT because they are affraid of changes, but because it makes no sense at all. (unnecessary slashdot-like-oversimplification: were the german jews concerned about Hitler rise to power for being "affraid of changes"? Not all changes are good ones.) Another thing, the "development cycle faster myth". That makes me wonder if people are using vi to develop in Java and Tomcat for running it. I'm not going to bash Tomcat because I used it for more time than I'd like to admit, but a real app server would be better. I hope Geronimo will save the opensource guys. I work in a project of considerable size and basically my routine for changes is: - Update a line a JSP: refresh the browser screen, 1 or 2secs; - Update a class: the local testing appserver gets it automatically, update the browser screen, 1 or 2 secs; - Use Struts components? Drag ad drop. - Create a Webservice? Click, click, there. - Mess with Struts config? Fill out a few nice user interfaces, no need to edit XML by hand. - JSTL? Drag and drop. - JSF? Drag and drop. - Build? Export the EARs and place them in the proper server. Now, if you have to drastically change Struts config at every new version, it's because you app is badly designed to begin with. Once it's initially set, little will change over time. Please, is Java that productivity killer? Only you're either using vi or Eclipse with no decent plug-ins. If so, it's programmer's fault, not language's. I agree, though, that the entry point for J2EE and Java in general should be made more straight forward with less "letter soup". Now, some people just don't learn. XML was way over hyped, and now some are saying that the excess of XML is bad. Could you please make up your mind? The next big crap is called AJAX (like "we weren't supposed to be building web applications anyway, so this is a broken workaround on a broken solution"). Please stop the Ruby madness while there's still time. Note: The first time I heard of "Java dominance" of something was when those "language critics" started shouting the (not so new because it's older than Java itself) big new thing. Isn't it convenient? It's like cheap propaganda for making it look like it has reached the peek and has nowhere to go from there but down. >... No, it doesn't. It comes from the much crap we read these days on the internet and websites like this one. "Perplexed developers with question marks above their heads" is a better definition I think. Not even in my worst dreams I see myself loosing my job because all systems from my company now run on Ruby instead of Java. It's like saying that all Windows sys admins will loose their jobs because tomorrow all servers will run Linux. BTW, Linux is what it is today because of big companies such as IBM. Who's the big company supporting Ruby? Who should worry about it is the Perl, Python and PHP crowd. Do you know how many websites run on PHP? Far more than JSP/J2EE. And there are lots of open source software based on PHP such as forums, CMS, etc around. Does it mean that PHP is killing Java? No. PHP is simpler and quicker for certain kinds of applications. I'm not a language geek, languages are a necessary evil, I have to express the logic somehow and the language does it, now listening to "Wow! this language gives me 10x all that much power" leads me to wonder: "How is this language different from the other 2513 computer languages ever invented so far!?" Buzz, hype, fanaticism, this is the fuel for the Ruby bubble. The next question I make to myself is: "There's not a language that's good for all things, they usually fit in niches. There are other that are more of a general use like Java, but that doesn't mean they are the better, what makes a language successful?" > Buzz, hype, fanaticism, this is the fuel for the Ruby > bubble. Well... surprisingly, "Buzz and Hype" were the things that put Java where it is today. If it hadn't been for "WORA" or "NCs + Applets will kill $COMPETITOR" etc, Java wouldn't have grabbed mindshare. And "fanaticism": surprisingly, I don't see that in the Ruby community, which is generally more relaxed than the Java crowd (TheServerSide, Javalobby, etc).. > And "fanaticism": surprisingly, I don't see that in > the Ruby community, which is generally more relaxed > than the Java crowd (TheServerSide, Javalobby, etc). It's me from the anonymous answer to you. Just to mention, I was called from all kinds of name from a gang of Ruby zealots another day in one forum, because I was questioning the value of the "must have" buzz on dynamic languages. Dynamic languages exist for a long time and today this is a kind of categorical imperative for "power" according to some. I don't remember such behavior from the Java crowd. The stranger thing, that was a JAVA forum. Now, why would Ruby developers be marketing Ruby in a JAVA forum other than fanaticism? I don't think Ruby developers are relaxed, at least not this new generation of ex-Java Ruby developers that have found the 8th wonder of the world in Ruby. Another thing that makes me wonder about the Nietzschean "will to power" unconscious desire in Ruby devs, carefully rationalized to look moral from the eyes of the community, are the silliest arguments.. Ok, that's your wish. I don't see anything plain wrong and I like the conservative approach that's used in Java. If compared to the "wild west open source" or the "right to innovate, screw the developers, micro$oft" style it's really better. I'd rather have platform improvements over syntactic sugar. Platform improvements would be anything related to VM, libraries, etc. >. What I was thinking of was that your IDE should be able to translate certain of the symbols (particularly keywords) into your own (human) language. From an English programmers point of view, I find warting and hungarian notation to be eyesores. However, I am open to the concept that if I didn't speak English, then most of the 'nicely named' methods in the APIs wouldn't mean much to me anyway, so I might be more open to languages making much greater usage of 'symbolic notation' instead of using words to describe the same thing. Perhaps a counter example to that would be the example of Forth, which I am told has a strong similarity to formal Japanese (eg their sentence structure is (kind of) stack based). And so you'd expect Forth to be the 'killer language' for Japanese developers, and yet for some reason, that appears to not be the case. Then again if closeness to a particular natural language counted for anything, then surely Basic would have been the killer language (arguably Visual Basic reigned supreme* till the late 90s when Java kicked it off the throne, and MS put the stake in its heart with their .net strategy). * With perhaps COBOL being the previous champion - both of these languages are 'similar' to English (certainly more so than, say, Perl or PHP :D But of course when we speak of Visual Basic and COBOL - it raises images of 'the great unwashed' the 'lowest common denominator'. One company I worked at as a junior developer in the 90s had about 70 VB and Oracle developers, and some C work came in, I was surprised when noone put up their hand for it. I put my hand up, did the job, and then conducted an informal survey amongst my co-workers about why noone else had wanted to even volunteer for it (and this was back when the university in that city taught C and not VB) - the consensus was that C was 'too hard'.. This is relevant for consideration in this context - because there are plenty of 'high falutin' languages that have their handful of rabid adherents, but most of them never go anywhere. So I suggest that 'the next big thing' is going to have to appeal to the lowest common denominator, or it will remain nothing but a boutique language. If the language is going to get big, being accessible to the masses is infinitely more important than how sexy the syntax of its closures is. So if we want to look for the next big thing, we should look at the other big languages, of which there are a handful: VB COBOL JAVA PERL (Where big = number of jobs/programmers, or lines of code, or numbers of legacy apps, these languages rate well above the boutique languages) Nothing else comes close (you could argue that embedded processors are a larger market than applications and add assembler to the top of that list - but I think the context of this discussion is applications programming). Perl of course is the antithesis of the argument, unlike the other three it does not pander to the masses by being easy, rather it is so difficult it is proverbial. Perhaps you could say that Perl is the language of systems administrators, and lump it into the 'not applications programming' category with assembler. That would drop the list down to: COBOL VB JAVA Also rearranged into historical order. So... what was good about those languages? COBOL was good for data processing. VB was good for quickly putting together GUI applications. Java was good for networked or distributed applications. In some respects Java suffers by comparison with VB, particularly when it comes to the ease of putting together a GUI. Now lets consider that in the context of 'the other Tiger' Mac OS X 10.4... We have fast good GUI design (XCode/Project Builder + binding), data processing with Core Data, networking with rendezvous/bonjour... So why hasn't GNUStep/OpenStep become 'cocoa for windows' and already become the next big thing? What is holding them back? Is Objective-C the anchor around its neck? Is it that it is an open source thing (no big economy company driving it)? > What I was thinking of was that your IDE should be > able to translate certain of the symbols > (particularly keywords) into your own (human) > language. This is a bad idea. Imagine yourself working on a global team with such "feature". Not to mention documentation. >. I have to agree partially. It's always good to learn other languages, but I don't think one language makes the developer a good developer. Usually C and Perl programmers write crappy Java code, because they try to either program C or Perl in Java, and complain that "Java doesn't have So, I'd say that to be a good developer one must learn the platform it develops on in order take the most advantage of it, and it comes only with experience. No C++ developer will ever do as good code in Java as an experienced Java developer. >). > As I understand it, while SmallTalk can optimise many uses to give equivalent performance to primitives it doesn't work efficiently for all types of code. So primitives are still necessary for some work and also for some environments where a sufficiently powerful JIT still doesn't fit. > As I understand it, while SmallTalk can optimise many > uses to give equivalent performance to primitives it > doesn't work efficiently for all types of code. Be that as it may, the rift between primitives and reference types is quite annoying. New Java devs always have problems with them ("How can I get a dynamic list of ints"? "You can't, use ArrayList." "But the compiler complains". "Well, of course, you'll have to wrap the values",... I don't want to know how often I had that discussion with people, mostly ending with me pointing them to the Apache Commons Primitives library etc.). What does that have to do with performance? Well, Java has always put developer productivity before raw performance. Actually... AutoBoxing and Unboxing is such a tradeoff; the newbie trying to stuff an int into a reference type based container, won't get an error message, and won't even know the difference... except for the bad performance (for large numbers). Again: I know that primitives were a necessary tradeoff, considering the first implementations of the JVM. >So primitives are still necessary for some work and also > for some environments where a sufficiently powerful > JIT still doesn't fit. You're right... although that area is shrinking too (current Sun MIDP JVMs already support parts of Hotspots capabilities). Not to mention, that old LISP, Smalltalk,... implementations also ran on - what we would call - limited platforms (16 bit CPUs, less memory than a modern applications splash screen). > > The compile/debug cycle complaint is just wrong. > > Javac and Jar are actually really really fast. > > That's not the point; Eclipse has basically removed > the compile cycle by pushing it in the background (I > haven't called javac & friends for years). That was one of the six points that this chapter discussion is about. So it is the point. You bring up Eclipse. The last project I was on which used Eclipse did have an enormously long build/compile cycle (so I can relate to Bruce's pain), but is the source of the problem Java itself? I have tried to show that it isn't the fault of Java, but rather the source of the problem is the tool set (eg Eclipse/Ant). That's not to say that you shouldn't use Eclipse, (or rather: I don't want to get into that, I'll just file it under the category of self inflicted wound and move on) you might choose to use the slow build set because you believe that there are other benefits, whether that is a particular refactoring command you cannot live without, or that you love putting it all back together when it crashes because it is a buggy piece of ... (must not go there, must remain calm, breathe deeply, go to the happy place) ;-) > A problem is that Hotswap (code replacement) is very > limited and will stop working even for trivial > changes (just edit an anonymous class to access some > local var... it'll need changes in the containing > class to add accessors, thus changing class > signature). I'm not sure what you are getting at with the Hotswap thing, I don't understand the relevance to a slow build/debug cycle. > >: > There is a kick butt build tool for Java, it is called javac. Sometimes we also need to use another really good tool called jar. Where things go off track of course is when we need to deploy signed jars, and include W-ars and E-ars and all the other pomp and circumstance which goes with EJB/J2EE. At that point people do start looking around for other tools (like Ant or Maven). But I think that there is a kind of 'gravitational pull' to these tools... because it is easy to add things like automated tests, code coverage, javadoc generation, metadata generation, run some sql scripts to rebuild a test database from scratch... And sometimes they for some reason didn't quite work, so you end up doing a full clean and build each time 'just in case'... And before you know it... you're waiting 30 minutes for a build because you changed one class! That is a true source of pain. Oh, and changing language [b]absolutely will not make that pain go away[/b]. Because if you were trying to do all those other things, it doesn't matter if it is Ruby, Python or COBOL... because it is the *build* part that is the pain, not the *compile* part. To avoid this kind of 'own goal' people just need to ensure that in order to debug, they don't need to do a deploy. If they need to deploy to do a debug, something is wrong with their choice of process. > >). Well, that is an argument about how Java should have been implemented. I understand where you and Bruce are coming from, but I think that boat sailed ten years ago. I also think that if we were using modern vms to reimplement Java from scratch we might well choose to do things differently. You might find that as they make changes to support scripting languages, then a lot of the 'missing' smalltalk stuff might 'sneak in the back door'. But, other than university researcher types (ie the purest of the purists), out in the real world I don't see anyone wailing and gnashing their teeth over the whole primitives thing. Autoboxing looks like it is causing some pain, but that goes more in my category of 'look how badly tiger was stuffed up' ('the love of generics being the root of all evil' and all that jazz). >). And equals vs == >. Spilt milk. And how much space should 1/3 take up anyway? Or e, or pi? > Don't get me wrong: I know why primitives happened > (first VMs with the very simple interpreter plus > simple GC), but that doesn't mean that we should have > to live with that nonsense longer than necessary. Eh. I don't mind it. Even if it means that Java is really actually two different languages, since there are a set of operators for primitives, (eg +/- && || true, false) and a set of operators for objects (new . ()) and some for arrays ([]) which are objects that pretend to be primitives etc. Java is reasonably comact in terms of operators, while still being reasonably expressive, while still being reasonably readable. Sounds to me like they did a great job. > >. Since I'm not involved in those competitions no. I was hoping that someone would pop up with proof either for what I was saying, or against (the web page which I saw the non-static-typing person lamenting the inability of non-static-typed-languages to win is also several years out of date). Frankly the nuances of the differences between Dylan and Haskell and Scheme seem a bit academic to me. I also think that it is not necessarily a strict duality, it is not just 'evil' ie static-typing vs 'good' (whatever the opposite of static typing is). There is static, weak, strong, dynamic... looks like a multi-dimensional thing to me. Looks like the opening volley of the standard 'language holy war'. What makes it interesting though, is that Bruce is of course supposed to be one of the 'faithful true believers' - and he has 'gone over to the other side', as though Patton had said "hey, maybe the soviets* are actually a good idea" *Brief history lesson: soviets good, communists bad (soviets were actually a really good thing, and the communists got into power because they said 'if you like the soviets, vote for us'). But I digress. The point I guess is that Bruce's original assertion that static typing sucks, is just as much chest beating as my assertion that it doesn't. *Except* that I suggest that there is an easy to find long term example where static typing has consistently won out time and time again. If I'm wrong, it should be easy to prove me wrong. Whereas Bruce's assertion doesn't seem to be testable... > >.... No, it shows that if I'm scared of anything, it is the snakeoil salesmen that call themselves architects, who usually have half or less the experience that I do in Java, making decisions about what tools and technologies I am told to use (even though they may know nothing about the tools and don't use them themsleves), and making design decisions about code they never have to write or maintain. If I am scared of anything, it is the people who are looked to to make these calls, but are then not held accountable for the decisions they make. Power without accountability is *always* a dangerous thing. This whole 'you are scared of change' thing is annoying. You see it all the time from the Agile zealots. Anyone that has tried their way of doing things, and didn't like it is 'afraid of change'. How rude is that? And wrong too, if they were really afraid of change they never would have tried agile in the first place. In this case, if someone has tried Ruby and Python and doesn't like them, does that make them 'afraid of change'? Of course not! My personal objections to Ruby were the special characters on the front of variable names - which give me flashbacks to hungarian notation. My objection to Python was based on their use of the underscore as a special character (for constructors or something like that) - which gives me flashbacks to visual basic. But those are subjective things. Some people don't mind having horrible mangled variable names. And I can see that Python has a point about the extra 'fluff' of C/Java syntax ( {}s and ;s ). But they have their own problems, just ask a Python fan about whether spaces or tabs are the proper way to go... >"... You're not getting into the spirit of this whole my language is better than yours flamewar thing. Please practice your posturing and soapbox standing thing some more. > how > about doing something productive? Like: thinking how > the *benefits* of static typing could be added to > dynamic languages? (How about a extension library for > Ruby that allows you to write protocols/interfaces, > or enforce certain class signatures). Or, better yet, taking the good stuff from Rails (eg using reflection to create table classes - sorry Hibernate, you're too slow and ugly, you lose), and subsuming it into our own web frameworks? Some people just don't like Java, they never have, they never will. That is their problem, not ours. As the saying goes you cannot please all of the people all of the time. Perhaps they find that they are 'magically' more productive in another language, because it fits their style better. > I'm not sure what you are getting at with the Hotswap > thing, I don't understand the relevance to a slow > build/debug cycle. OK: you can replace code in running Java Code using Hotswap, which works nicely... except that the changes must not touch the class structure; if they do, the update can't be applied. Ruby (or Smalltalk for that matter) can update even class definitions at runtime and can update object trees with the new methods, data,... This means: fewer restarts (if at all) during development. This is one major annoyance for me when developing (web apps, or Eclipse plugins) because even a minor change in a class (or the code of an anonymous class) will stop Hotswap working. > been implemented. I understand where you and Bruce > are coming from, but I think that boat sailed ten > years ago. Weeeell... the nice bit about JVM language implementations like Jython, Rhino or JRuby is the fact that they can make the right decisions on top of the JVM, while still keeping the available libraries, the powerful GC,... So I think while Smalltalks train might have rusted in the station (to use Bruce's words)... there's now a chance to get higher level languages without major upgrade pain. Sure, Jython (or Rhino,...) have been available for years, but now they've managed to get more air time and space in the mindset to really take off. > purest of the purists), out in the real world I don't > see anyone wailing and gnashing their teeth over the > whole primitives thing. Weeeell... there must have been a reason for MS to add AutoBoxing to C#, which caused Sun to rush and add it too. > > with bit widhts of types; thing is: it wouldn't be > > necessary. > Spilt milk. Well... I mostly annoyed to see, that milk will be spilt again, over and over. Like the fact that NIO Buffers measure their size in int... >And how much space should 1/3 take up anyway? Or e, or pi? That's a matter of precision. Define one that takes the same precision as a double. Make another accessor for PI, where you can add in a precision parameter (PI'd have to be calculated then, if you really need that). > This whole 'you are scared of change' thing is > annoying. You see it all the time from the Agile > zealots. Well, I know that rhetoric too, but mostly from grumpy, bitter old LISPers or Smalltalkers who still can't believe that people didn't flock to their tools, despite the fact that they've been telling them what idiots they were for not using them.......... My "scared shitless" argument doesn't come from that; I get that impression from following many Java-releated blogs, Javalobby, ... And there's just the impression, that devs (at least the more vocal ones) feel like they've been caught off guard by Rails & Co. While the J2EE world has been busy developing Web Framework #58 ("Now with 20% more AOP and less DIY" or whatever), the Rails guys managed to truly make a tool that makes you more productive and takes away the pain of J2EE Web development. (And no; I'm not just talking about Rails' scaffolding support or ActiveRecord; it also facilitates MVC without bragging about it; it comes with tools to set up controllers, data binding,... and other elements without requiring an IDE; it comes with templating and tiles support (without requiring you to evaluate all the 42 Java templating Frameworks); it doesn't need Taglibs, because the language is expressive enough on its own,......). I think many Java devs just feel like they've been caught off guard; while their world was busy building more complexity, someone else managed to achieve the same thing in a simple manner. Oh... it's not that all feel that way; many have recognized the problems before too... the folks who started Spring, or whoever gave the impuls to do EJB3... I also perceive some sense of fear (although those people would probably not admit it) from all the bloggers and poster who like to spread FUD about dynamic languages. I remember a recent JavaPosse newscast, where the hosts were also talking about Beyond Java... but they dismissed dynamic langs by claiming that Java just allows to write more stable code... for I/O for instance... without any rational explanation. Static type checking can do a lot of things... but it won't tell you to catch Runtime Exceptions; it won't prevent you from casting to a wrong type, it won't tell you to close those InputStreams (because you're leaking resources, file descriptors,...). FUD is used by people who are scared (or I should've use the word "threatened" instead of "scared"). Oh... of course, I should clarify: a statement like "the Java world is FOOO", is necessarily an over generalization... There are millions of Java devs and the only statements that hold true for all of them are things like "They have to eat" or "They know what an if statement must look like". > FUD is used by people who are scared (or I should've > use the word "threatened" instead of "scared"). I wouldn't take the words of others as FUD so quickly. What's happening is beyond my comprehension, yes, I have taken a look at Ruby, yes, I did look Rails, and just don't get it. That simple. The world is not CRUD apps, so you realize there's so much more out there that needs to be addressed by a language. It seems more like a stampede. All of a sudden everyone is running in one direction and they don't even know why. :) It's just ridiculous. Every once in a while this happens, XML was the cure for all diseases not so long ago, C# was also "the killer", so many times better than Java as I used to read, now look again. I use also Perl at my job, and used PHP for quite a while(I like Perl the most). The same way as I could be using Python or Ruby, I just don't understand how this would be a threat to Java. The scope of Java (EE, SE and ME), the tons of free or commercial libraries and tools available. The IDEs are a real productivity boost, I can't imagine myself programming without one. I just can't see Perl, nor PHP, nor Ruby, not Python, If the argument were .Net and Visual Studio I'd understand. But this!? Dynamic languages is the word of the day (curiously Ruby is older than Java itself). People are arguing against primitives for reasons that are the stupidest. I think what you call "scared" devs are more like "perplexed devs with huge questions marks above their heads while thinking: Is THIS that is supposed to kill Java!? I don't get it.". Yes, I don't get it. > I wouldn't take the words of others as FUD so > quickly. Well... "FUD" == "Fear, Uncertainty, Doubt". If someone spreads arguments about some topic that induce F, U, and D ... then we can call that FUD. Someone saying "Well, I FOO seems nice, but I think that you will lose your eyesight doing it" without providing conclusive evidence is FUD. > Rails, and just don't get it. That simple. The world > is not CRUD apps, so you realize there's so much more > out there that needs to be addressed by a language. Rails is not about CRUD apps. The scaffolding and ActiveRecord are the things that are the ones, that most people refer to. For me: these are useful, but features like trivial MVC (without having to jump through XML hoops), default templating system, many default extensions (what JSPers would do with Taglibs), no recompilation step, ... are the things that make it very easy to use. I come from having to fret with Struts for 2 years... (again: not database apps)... and *every* little change to the Struts app involved a lot of cursing and staring at StackTraces coming from having to edit tiles.xml, struts-config.xml, whatever.xml to even add a simple new action... Rails just works like a charm for that... it just works, and I didn't even have to download 100 MB of specialized GUI tools for this. Oh... and to use your argument: "The world is not Huge Enterprise projects with 50+ developers". If you do that, use J2EE all the way up and down. Rails, on the other hand, makes it trivial to write a well designed web app; Many people (me included) have found that they were writing an app with Rails... that they would have never thought of writing with J2EE. The good thing: it's not some quick'n dirty solution, but instead can be a properly designed OOP app; considering the fact that Rails fosters MVC, Testing (creating a controller etc in Rails creates test skeletons for it),... it's better at this than many other web framework. > a while this happens, XML was the cure for all > diseases not so long ago, I'm not saying that Rails or Ruby is the solution to everything (neither are the Rails developers, BTW). But the J2EE environement has grown to a certain size and has accumulated a lot of cruft, much of it in the Form of overused XML. A shedding of some of that is quite useful, and actually what happens right now (just look at EJB3, Spring, ...) And Rails is one of the signs of that happening... > be using Python or Ruby, I just don't understand how > this would be a threat to Java. Because Java's big claim to success has always been that it's higher level than C/C++, has higher developer productivity than C/C++, is much easier to learn,... And now suddenly a language (Ruby) has come in to the spotlight ... and it boasts the very same advantages. Now: Ruby has been around for as long as Java; same thing for other dynamic languages. BUT: Rails has now provided a Killer App for Ruby, and has propelled it into the public eye in less than a year. Hype... yes... but you need to remember that Java woudln't be here if it wasn't for Hype. Due to Applets, NetworkComputers, WORA... Java was hyped into every magazine and article in 95/96. And now it's Ruby's time ... if the Ruby community plays it's cards right, then Ruby will have major success in the future. *That's* the reason why *I* think Ruby is different from those other dynamic languages. Smalltalk, LISP, Python,... none of them had major killer apps... they might be just as good as Ruby (better?) ... but it's all about marketing. Hey... XML was no innovation... SGML had been here for decades... yet the marketing campaign in 98 (the fact that W3 made it a standard) made it popular so we're using it all day nowadays... > or commercial libraries and tools available. The IDEs > are a real productivity boost, I can't imagine myself > programming without one. Weeeeell... if it's IDEs you want: or for pure Ruby Not to mention that Rails is productive without an IDE, it already comes with scripts that create controllers, actions,... Sophisticated Deployment also comes in the form of Switchtower: which facilitates multiple deployments including rollbacks in the case of a failure,... > I think what you call "scared" devs are more like > "perplexed devs with huge questions marks above their > heads while thinking: Is THIS that is supposed to > kill Java!? I don't get it.".... Message was edited by: murphee Message was edited by: murphee @murphee I agree 100% with your assessment of Struts. I don't see anyone else disagreeing with you about that. Let us take as given the statement: Struts is horrible. Let us also, notice that "Struts is horrible" is different from "Java is horrible". I find it entirely plausible that Ruby development is much faster than Struts development. However, I think Java *done right* can probably give Ruby on Rails a jolly good run for its money. In software, I think there are two fundamental forces - on the one hand, you have the forces of simplification (the good guys). These people live and breathe Einstein's dictum "make it as simple as possible, and no simpler". On the other side, you have the forces of complication (the ungood guys). These people love to take simple things and make them really complicated. If they had a truthful motto it would be "there is no design so simple that it cannot be made more complicated with an extra layer of indirection". How to tell if you're working with someone who makes things overly complicated: They have a static Handler to a Singleton Factory. And it turns out that there is only one class that fits anyway, but you have to reference it by the string of its class name anyway. In order to use it you would say something like: Handler.getFactory().getInstance("MyConcreteClass"); when you could have just as well said: new MyConcreteClass(); (And no, the factory method won't do anything useful other than hiding the 'bad' and 'evil' new keyword.) An easy way of telling if the framework you are dealing with is Simple or Complex is to look at (a) the doco, and (b) who is trumpeting it and what they stand to gain. If the framework can be described well enough to get up and running after looking at a couple of webpages, it is good and simple (eg RMI, SMTP, JDBC). It doesn't mean they don't have depth, but you can start doing useful stuff almost straight away. A complex framework on the other hand, has multiple hefty tomes describing it, and there is a cloud of consulting companies trumpeting about it - like the buzzing of flies around a week old corpse. Due to the nature of their business, it is difficult to find a consulting company that will actually promote a technology that will make you less dependent on their expertise... So it may be that noone is making money from the 'on Rails' framework. If noone is making money off it, everyone has an incentive to make it easier to use... It's interesting reading this thread after it's been out there for a while. Let me clarify a couple of things. - Re. static typing, I think it's interesting to look at some of the most productive languages of all time, and look at the typing structure. The best systems languages have tended to be statically typed. The most productive applications languages have tended to be dynamically typed.. For similar reasons, dependency injection is easy in dynamic languages, but needs a whole framework in Java. - Primitives were absolutely the right call for Java, but to say they don't matter is wrong. Primitives hit you hardest when you're building frameworks. When the basic atom that you're dealing with is not a single, united concept, it complicates your frameworks, especially when you're trying to do things transparently. Try this. Solve a basic XML emitter with Java.. - Compile/debug/deploy cycle matters, and more than you think. Exploration matters too. Seaside is perhaps the most impressive example of this...experience a problem, open your Smalltalk browser within your web browser, write the few lines of code that fix the bug, hit the back button twice, and reload. You never break stride.. 2). As to shooting authors, I do think it's healthy to be a skeptic. There's a whole lot of crap out there. And this book may or may not be part of the dung heap, but you should draw your own conclusions. Hope this all helps. > - Re. static typing, I think it's interesting to look > at some of the most productive languages of all time, > and look at the typing structure. The best systems > languages have tended to be statically typed. The > most productive applications languages have tended to > be dynamically typed. I rate them thus: the most productive applications languages are VB, COBOL, Java, C and Perl (though Perl may be in a different category as a system admin tool rather than an application language). Where I define 'productive' as 'most stuff produced'. Most of the applications out there are written in VB, COBOL, Java or C. And that is where most of the applications programmers jobs are too (maybe add .net if we are talking about current jobs rather than historical accumulation). I don't know much about COBOL, so if you tell me it is dynamically typed I guess I would have to believe that until it was proven otherwise. VB had the notion of a Variant, which was a data type which could pretend to be any of a dozen other things (numbers/dates/strings all rolled into one) ... so I guess that is kind of dynamic. Java objects can be referenced as any of their super types or interfaces, which can be quite dynamic... But I suspect that is not what you really mean by 'dynamically typed languages'. I would have said they were more in the 'statically typed' camp myself. Perhaps I just don't understand the distinction between statically typed and dynamically typed. Or maybe your definition of 'productive' is enormously different. But, from my point of view, if there is hardly any code written in those languages, I don't see how they can be described as being 'productive'. Please provide some examples of dynamically typed languages that have been as productive as the ones I've named. Maybe you are talking about 'theoretical productivity', ie 'theoretically, if all the VB programmers had been using Lisp instead of VB they would have been much more productive than they actually were'. But then that puts the burden back on you to explain why all these 'theoretically superior' languages are left in the dust by all these 'lowest common denominator' languages. >. So... if I understand you right you are saying that in a dynamic language you can: change this: class Foo { public doFoo() { //...Y... } } to this: class Foo { public doFoo() { //...X... oldFoo(); } public oldFoo() { //...Y... } } ... and you are also saying that cannot be done in Java? Or are you merely asserting that it cannot be done at runtime? Because of course the thing can be done easily when we are writing the code, and is done all the time in Java with inheritance for instance, where you call the super method. Doesn't that boil down to saying that Java is not very good at changing existing classes on the fly? And doesn't that boil down to saying that you know that you will write some code which changes a particular class, but for some reason even though you know that when you write the class, you don't want to subclass it there and then, you want to wait till the last minute? ... Must be a Lambda Calculus kind of thing I guess... > For similar reasons, dependency injection > is easy in dynamic languages, but needs a whole > framework in Java. Dependency Injection is way overblown. But such is the way of patterns... those who sit down and slap a bunch of patterns around as their design phase invariably end up with a horrible design. Fortunately for them they will soon be promoted to architects and not have to deal with the messes they have made. Bitter? Moi? Por quoi? > - Primitives were absolutely the right call for Java, > but to say they don't matter is wrong. Primitives hit > you hardest when you're building frameworks. Not if you can design/code your way out of a paper bag. Evidence of good frameworks despite primitives: Servlets, RMI, Java sockets, Applets etc. Most of the core Java apis are pretty darn good. Most of the 'silver bullet of the month club' ones are horrible. And yet they both use the same language: Java. So obviously, it is not the language that makes the difference. > Try this. Solve a basic XML emitter with Java. Ha! Bad example! Java pojos make superb XML emitters if you code the hierarchy properly. At the end of last year I was working with a chap who liked (and was advocating the use of) JAXB. I'd never worked with JAXB. To see if the claims for it were true, I handed him the XML schema we had lying around, and then grabbed the spec for what XML we wanted our system to spit out. It took him about an hour to create the 200+ JAXB files, whereas in the same time I had (from scratch) hand rolled, tested and integrated into our application a solution using a dozen or so pojo classes. (Where the JAXB might have had an advantage would have been reading the XML - but in that case our needs were very very basic and we could just use a DOM parser and pull the bits we wanted off that - or, if the spec or the schema had changed constantly, but invariably it was easier and faster to modify the hand rolled code than it was to regenerate all the JAXB classes). Another thing which made it hard for our JAXB loving friend was that none of our 'ordinary' programmers could figure out where to even start using the JAXB classes - personally if I was writing a framework like JAXB I'd make sure to make the one class you will work with the most actually stand out from the sea of helper classes. (Like shove all the stuff you won't use directly into a different package or something) >. I had a piece of work once maintaining an app which used C sockets. The people who were using the app were really unhappy because it had suddenly stopped working one day, and another app they used which did something similar had kept working. Turns out they changed the network protocol, so of course the C sockets barfed big chunks. Whereas the VB app just kept working and didn't even notice the difference. That was quite a powerful lesson on the advantage of working with high levels of abstraction. > - Compile/debug/deploy cycle matters, and more than > you think. Exploration matters too. No, I think that exploration is important. That is just one of the reasons that I think deciding up front that you are going to use a whole lot of infrastructure apis like Struts and EJBs is wrong for most small/mid size projects... and *especially* wrong for very large projects. Same thing with patterns. Patterns are solutions to problems, and they have drawbacks. So an architect that sits down and says here is our design, and hands you a bunch of patterns as step one is causing trouble. If they decide what problems they are going to have, and start making compromises before they even explore what the problems are actually going to be... no wonder so many large projects are in trouble! I see lots of Java adds begging for senoir programmers experienced with Struts... and I just think "you fools, if you hadn't used Struts you could have finished by now with much cheaper/less experienced programmers". > Seaside is > perhaps the most impressive example of > this...experience a problem, open your Smalltalk > browser within your web browser, write the few lines > of code that fix the bug, hit the back button twice, > and reload. You never break stride. Eh. Find problem in running Java program. Alt tab to Wordpad. Fix code. Hit X (or Alt Q) in corner to kill Java program. Alt tab to DOS prompt. Hit up arrow twice, then return (to compile). Hit up arrow twice then return (to launch the app again). I count that as 10-12 keystrokes... seriously, it is not that big a deal. If it is a big deal for you, then throw out whichever massively over engineered IDE is the anchor around your neck (and they all have their moments - even (especially) Intellij which is not just over engineered but over hyped too). I mean... who wants to have 80% of the space on their monitor taken up by millions of tiny buttons, menus, views and tabs anyway? Liberate yourself! It is just like a big bra burning - only not quite as interesting to watch perhaps. :D >. I don't think C syntax is really all that special. If nothing else, Python shows us that there is some cruft in that syntax that we could get rid of (not that we could not have learned exactly the same lesson from VB... :) Java does suffer a bit from being two languages, one of primitives and one of objects. * But it is not all bad. Logical operations for instance make a lot more sense when you are dealing with primitives. I've seen examples of how to code up your own if and booleans in Lisp and... it ain't pretty. > 2) I don't annoint a winner in the book. I pick Ruby > to talk about because I didn't want this book to be a > dissertation on language. I wanted the book to be > about Java, and to be a wake up call to the > community. But what it sounds like you are saying is that there are all these fundamental problems with Java - things that can pretty much never be changed (like primitives and dynamic vs static) that are horribly wrong... oh and by the way, it just so happens (ta da!) that there is this language called Ruby which has none of those problems... Why sound a warning call about things that cannot be changed? The horse has bolted, the milk has been spilt, and the barn burned to the ground years ago - no point in calling out the fire brigade now. > As to shooting authors, I do think it's healthy to be > a skeptic. There's a whole lot of crap out there. And > this book may or may not be part of the dung heap, > but you should draw your own conclusions. Thanks for not taking that too badly :D > Hope this all helps. * Java also has its own meta-language. Anyone that regularly writes methods which return hashmaps (and particularly hashmaps of hashmaps**) does not speak this meta-language. Neither does anyone that is overly fond of the static keyword, or handlers or factories and singletons. ** May I recommend to anyone in this category that they apply themselves to mastering Perl, everyone will be happier that way all around. > enormously different. But, from my point of view, if > there is hardly any code written in those languages, > I don't see how they can be described as being > 'productive'. Read your sentence again and think about it. You already stated why those languages are deemed more productive (Hint: less code written...). BTW: where do you get the idea that there's hardly any code written in dynamic languages (see Perl, PHP, Python, Ruby, etc. for use in Web projects). I hate to bring up Rails again, but just looking at the code Java solutions for what Rails does (Hibernate+Struts+Tapestry+Tools for creating the app) shows that. (Take just one of the source of thementioned Java tools ... it's source will be larger than the whole Rails package). >. What you *could* do is use a DynamicProxy to do that, but then you still have to write a the delegation code yourself, not to mention that you can't do that to a live object (ie.you have to give the reference of the DynamicProxy to the user of the class). Not to mention that this gets more difficult/tedious for non-Interface methods (then you'll have to engineer a class with the same methods) or impossible (if the method needs to be of a certain class and that class is final). > Doesn't that boil down to saying that Java is not > very good at changing existing classes on the fly? A) It means that. B) it means that you have to jump through many hoops to do anything like that. Just observe the many efforts to do AspectOriented Programming in Java. There are Source solutions (AspectJ I think, maybe others) or runtime/bytecode modification ones (JBoss I think, others,...). Both solutions are difficult, have problems (the source ones need tools support and require developers to learn the new syntax), the dynamic ones are always a bit fiddly (messing with bytecode always is). With Ruby (or Smalltalk or whatever), it's trivial to simply change the methods of classes and even of individual objects at runtime, using only the Ruby language. For classes: basically, the class definition of a Ruby class is also code that is executed, thus you can run code at "compile time" or "class load time", which can add methods, wrap methods, or whatever you want. Reflection & Co is very easy and less verbose in Ruby, not to mention that you can do all the above mentioned tools. Writing tools like (remote) proxies or anything like that is trivial, as you only need to implement the method_missing methods and use an Object instance. Any non-defined method called on this object will trigger the method_missing method, which can then ... do whatever it wants (forward it to the remote receipient, ...). As I mentioned above, this is similar to the DynamicProxy class in Java, BUT without the problems that I also described above. Not to mention, that this kind of thing can do things that will/can/do improve the maintenance and updating of systems without shutting them down. Hotswap can do things, but not change class definitions. In dynamic languages, this is possible at runtime; if you see that you can actually enumerate all live objects in the heap (not possible in Java at all), you can do this update at once, or using the method_missing tools for incremental update. >. And again: if you don't like this, then you'll probably also don't like anything AOP-like or any bytecode-instrumentation at all. (Talking about that: you could actually do a JFluids/Netbeans-like profiler in Ruby/Smalltalk/... without having to resort to Hotswap/BytecodeInstrumentation... simply using the Ruby language). > >. On your rant with JAXB: You were probably better off with your hand rolled solution, although it depends on the situation. > >? > if > > there is hardly any code written in those > languages, > > I don't see how they can be described as being > > 'productive'. > > Read your sentence again and think about it. You > already stated why those languages are deemed more > productive (Hint: less code written...). No, that was Bruce's definition. I've gone meta. I am stating that productivity = finished apps. Now when I look at my machine at work (windows) and my machine at home (mac), these are the languages used to write the apps which I use: C Visual Basic Java (I eat some of my own dogfood) Objective-C Where are the apps written in Ruby, Smalltalk and Perl? At least if I used Emacs I would be using Lisp - but that is like 1 lousy program out of thousands. On the 'number of apps = productivity' scale the dynamic typed languages are like unto the fart of a mosquito versus a hurricane. Another way of looking at it is that I'm saying that NO lines of (applications programming) code are being written in those languages. I guess in your mind that makes them infinitely productive because there 0 lines of code and productivity = apps writeen / lines of code to write those apps. I'm just pointing out that the numerator is zero too. It is similar to the question of extra-terrestrial life, if we make certain (reasonable sounding) assumptions, then the logical conclusion is that ET should *already* have phoned home from here, ie we should have already encountered the aliens. If dynamic languages are so gosh darned compelling and productive, why is noone using them? And the burden of proof then falls on the people asserting the superiority of dynamic typing. From what I've seen the arguments go like this: (a) most programmers are too stupid to understand dynamic programming (the 'Paul Graham' theory) or (b) ... there is no b ...* Interestingly, as someone else pointed out, it is not Java that is under threat per se, but the scripting languages like PHP - Rails is not a threat to a Java/Swing developer for instance. However - if AJAX really takes off, and Ruby becoems the language of choice for Web 2.0 - then it might start to gain some traction. So to talk about Ruby, particularly Ruby on Rails and 'applications programming productivity' and thereby suggest that Ruby is an applications programming language is rather disingeneous. In this case, P does not imply Q * This has got to be a major source of embarrassment for the fans of dynamic languages. I'd love to see a decent explanation for this. :D > BTW: where do you get the idea that there's hardly > any code written in dynamic languages (see Perl, PHP, > Python, Ruby, etc. for use in Web projects). From a quick unscientific survey of my own experience with computers. I've seen more programs written in Delphi (for crying out loud) than all the dynamic languages put together. They just aren't there, they're not even a blip on the radar. > I hate to bring up Rails again, but just looking at > the code Java solutions for what Rails does > (Hibernate+Struts+Tapestry+Tools for creating the > app) > shows that. You must not have read my post where I agreed with you that Struts is horrible. However Hibernate+Struts is not 'the one true Java way of doing webapps'. There are other choices. I'll say it again, just because Struts is horrible, does not mean that Java is horrible. What Rails does is not that hard to do in Java. There was a post on the Java.Net frontpage the other day where someone said half the solution: use metadata to slap a rowset into a JTable. You can use metadata to generate (at compile time) the classes which go and fetch data from the database. And you can use dynamic class loading to load new ones on demand. It will take more lines of code than the Ruby solution, and it will be more complicated, but then you get a nice front end 'for free'... I wonder if Ruby does that part too? > (Take just one of the source of > thementioned Java tools ... it's source will be > larger than the whole Rails package). Just take it a long way away from me please :D > >. ... You can reload classes at runtime... > With Ruby (or Smalltalk or whatever), it's trivial to > simply change the methods of classes and even of > individual objects at runtime, using only the Ruby > language. Well, see this is the thing that gets me. You still need to write the code to do that. And you need to deploy the code to do that. And that means that you basically have a 'I hand you the code' phase. Now, in Java, that phase includes a compile phase. But the point I'm trying to make is that if you know when you finish writing the code prior to performing that phase that you have some code to modify one of your classes... why not just go the whole hog and actually modify (or subclass) the class at that point? So in that sense, the advantage of being a dynamic language is gone if you ever hand over a 'completed' app. ============ It seems that like AOP (with its logging example) 'dynamic' languages have only one advantage, that being that you can fiddle with things 'on the fly'. This advantage ends as soon as development ends and you hit the 'I hand you the code' phase.** And the usefulness of this advantage is proportional to the amount of time spent debugging. If the bug takes a couple of seconds to find, then the stop/compile/restart portions are significant. On the other hand, if it takes an hour or more to track down the bug, spending an extra 15 seconds on stop/compile/restart is trivial, and hardly worth trumpeting about (much like AOP and logging IMHO). What sort of bugs take mere seconds to fix? Typos. Variable names spelt wrong. ... in fact, the sort of stuff that the compiler would have caught for you if you weren't using a dynamic language ... Sounds like the main advantage of dynamic languages is only useful because of the main disadvantage of those languages. I chalk this one up to 'sure its an own goal, but did you see how fast I kicked it?' ============ ** This is not the end of the story - but only where the developer is of a level of sophistication and familiarity with the product to be able to change it on the fly. It would be nice to 'unbreak' Eclipse for instance when it falls over and dies whilst simultaneously corrupting its configuration information. But where would we find such sophisticated users? Perhaps the only ones would be developers who are also programmers?*** *** Maybe, this points the way to the missing (b) - that dynamic languages are not especially productive for applications, but rather their specific advantages come to the fore when talking about developer tools? After all, the Lisp guys are always bragging about the good old days on Lisp machines and how wonderful the toolset was... But then why can't they make the jump from tools to apps? Is it critical mass? Is it that there are so many of these boutique languages to choose from? Why? Why have dynamic languages failed so massively in the marketplace? > >. But how often does this happen? And aren't we talking about 'the next big thing' surely then we are talking about the choice of languages - and if its a language I chose, surely we are talking about code that I wrote, and that means I do indeed have the source code? > And again: if you don't like this, then you'll > probably also don't like anything AOP-like or any > bytecode-instrumentation at all. Guilty as charged. It is nothing a proper design in the first place would not have fixed. > (Talking about that: > you could actually do a JFluids/Netbeans-like > profiler in Ruby/Smalltalk/... without having to > resort to Hotswap/BytecodeInstrumentation... simply > using the Ruby language). Tools. Not apps. > > >. But messing with strings is dead easy. Pretty much all of programming is just mucking around with some numbers and some strings. > > >. Only important when time to find and fix the bug is very very short. > >? No, the rest of us built new barns, with better fire safety controls. The cost of a couple of extra sprinklers and a fire exit is really not that bad. We moved on, we built a bridge and got over it. Near the end of chapter 4, Tate wonders: ." Tate my call it being conserative, I call it being responsible. As someone with an unusually large VB6 codebase, one major appeal of Java has been Sun's commitment to backward compatibility while still advancing the standard. It's difficult to migrate VB6 code to VB.Net and arguably not worth the trouble. Lets break that down by classification: Problems with the language: * "Compromises", like primitives, which were introduced to lure the C++ crowd * The costs of strong, static typing Problems with the language tools: * A long compile/deploy cycle Problems based on [b]choices that people make[/b] that are unrelated to the language at all. * Frameworks that make life easier for experienced developers, at the expense of approachability for new developers * Over-reliance on XML, caused by Java's inability to represent structured data * Over-reliance on tools to mitigate complexity The compile/debug cycle complaint is just wrong. Javac and Jar are actually really really fast. The problem here is overly complex build and deploys using some automated tool - and we don't have to poke around much to see that the problem is actually Ant, not Java. Using Ant is a choice the developer makes, so that one really belongs in the 'shooting yourself in the foot' or 'cultural' problem category. Lets look at the first two problems: Primitives (and operator overloading for Strings) are, from a language purists point of view, a bad thing. However an alternative way of looking at them is that they are an optimisation of the things that programmers do most. Procedural programming, at a very abstract level is nothing but operations on 1s and 0s - if we get a bit more concrete it is operations on bytes. If we come down another layer of abstration it is simply maths and strings. So is this language level optimisation a good thing or not? Primitives for numbers is a performance optimisation, whereas using + for String concatenation is a 'conceptual' optimisation. But of course we have all heard the phrase: "premature optimisation is the root of all evil". So are these premature optimisations? I would argue that they are clearly not, since they are the things which happen *all the time*. The warning against premature optimisation is not to waste time optimising something that is rarely used and unimportant. For strong static typing, I think that is a matter of taste. But... perhaps in the interest of introducing some objectivity into the discussion we could perhaps look at the functional programming contests, which is where languages with strong typing are pitted against those without, on the same task. And the point I would make is that so far as I am aware, it is the languages with strong typing that always take home the honours. In terms of productivity - strong typing is king. Strong typing has (in those competitions), convincingly laid the smack down. This is not a 'ring out', this is not a victory because someone interferred by throwing a chair into the ring at the right time, this is not won by dirty tactics. It is a series of clear and clean victories where strong typing pins the opposition in the middle of the ring and gets a good count 1, 2, 3. Or to use a baseball analogy, this is a clean sweep 4-0 in the world series. Strong typing wins hands down, it isn't even close. And the fans all complain about how boring it was. :D As for the choices that people make, you cannot legislate against stupidity. But, in most cases, did the people involved deliberately set out to make the worst possible decision? No, probably not. So what is the cause of the problem? Is it not that we have all these pundits and experts (self proclaimed usually), who tout the latest 'silver bullet' solution to all that ails us - whether that is Struts, Agile programming, XML or whatever. (See also: todays Dilbert) People like Bruce run around proclaiming the latest and greatest thing. And sometimes they are even correct (ie that technology X provides a benefit in a very narrow situation). But more often what happens is that we at the coal face get lumped with the grab bag of latest and greatest apis, many of which are not appropriate to the task, and managing them all becomes a problem, so you get apis to manage the apis... ad nauseam. When the revolution comes, the book authors and gurus should be right behind the architects in the line of people that need shooting. > The compile/debug cycle complaint is just wrong. > Javac and Jar are actually really really fast. That's not the point; Eclipse has basically removed the compile cycle by pushing it in the background (I haven't called javac & friends for years). A problem is that Hotswap (code replacement) is very limited and will stop working even for trivial changes (just edit an anonymous class to access some local var... it'll need changes in the containing class to add accessors, thus changing class signature). >: Combine that with Rails tools like Switchtower (which facilitates updating of a Rails app, including rollback in the case of failures). >. Don't get me wrong: I know why primitives happened (first VMs with the very simple interpreter plus simple GC), but that doesn't mean that we should have to live with that nonsense longer than necessary. >"... how about doing something productive? Like: thinking how the *benefits* of static typing could be added to dynamic languages? (How about a extension library for Ruby that allows you to write protocols/interfaces, or enforce certain class signatures). > It's funny to read statements like this... it shows > that the Java world is *really* scared shitless over > competition like Ruby, Python, and other dynamic > languages.... Personally I don't think this is true, history has shown people are afraid of change, in some fashion, regardless of what that change is. Just because someone doesn't agree with anothers assumption doesn't mean that they are anymore correct or incorrect then the other view(s). Personally in my day to day life, I use Python, C, C++, Java, and occasionally others. I have no fear of learning a new langauge or wanting to try something completely random, yet my day job revolves mostly around java related things. Have I seen people in the java world be afraid of change? Yep. But the same can be said for those who live and breath C or C++ or any other language. I do find it interesting that the closest we have in Java to an object literal is XML. Especially interesting in that XML and Java objects are so different from each other (as evidenced by the diversity of mapping tools available). Maybe Java should pick up native support for JSON or something similar but more Java-like. (Rather than the horror-show XML literals they've mused about for Dolphin.) Agreed. I wrote SDL () to allow easy creation of XML like datastructures, but it would be nice to have something in the language. At least with 1.5's varargs you can statically import a list method to construct lists: List Daniel Leuck Message was edited by: dleuck I am tired of this. How many books does he need to sell for us to stop hearing about it. I am going to run to the train station because I don't want to over-rely on my complex car. > I'm not a language geek, languages are a necessary > evil, I have to express the logic somehow and the > language does it Having a better notation to express the logic is why we have children solving algebraic problems in minutes that used to take the best mathematicians in the world hours. The introduction of modern algebraic notation transformed the solving of equations from a problem that only a few experts could solve to a completely mechanical problem we can solve with a computer. > now listening to "Wow! this > language gives me 10x all that much power" leads me > to wonder: > > "How is this language different from the other 2513 > computer languages ever invented so far!?" It means that it's easier to build abstractions in that language and thus it's more like the powerful languages like Lisp and Smalltalk than it is like the less powerful languages like Cobol and FORTRAN. > Buzz, hype, fanaticism, this is the fuel for the Ruby > bubble. True, just as it was for Java. Ruby is another small step away from ALGOL and its decendents and toward Smalltalk and Lisp, but like Java, even though Ruby's not there yet, it's important to make these steps because programmers are conservative so if we're going to get to a language that makes programmer much easier for everyone, we're going to have to do it in small steps like C to C++, C++ to Java, and Java to Ruby.
https://www.java.net/node/645314
CC-MAIN-2014-15
refinedweb
13,815
70.13
SYNOPSIS #include <unistd.h> int lockf(int fildes, int cmd, off_t size); int lockf64(int fildes, int cmd, off64_t size); DESCRIPTION The The following lock commands are available: - F_ULOCK Unlock a previously locked section of the file. - F_LOCK Lock a section for exclusive use. If the section is already locked by another process, wait until it is available, or until lockf()is interrupted by a signal. - F_TLOCK Test a section for another process' locks. If the section is already locked, return a failure rather than blocking until the lock becomes available - F_TEST Test a section for another process' locks. The section to be locked or unlocked starts at the current offset in the file and extends forward for a positive size or backward for a negative size (the preceding bytes up to but not including the current offset). If the specified lock size is 0, the section from the current offset through the largest possible file offset is locked. fails. File locks are released on the first close by the locking process of any file descriptor for the file. The F_ULOCK request may release (wholly or in part) one or more locked sections controlled by the process. Locked sections are unlocked starting at the current file offset through the specified size bytes , or to the end of file if the size is 0. When all of a locked section is not released, the remaining portions of that section are still locked by the process. Releasing the center portion of a locked section causes the remaining locked beginning and end portions to become two separate locked sections. If the request would cause the number of locks in the system to exceed a system-imposed limit, the request fails. A potential for deadlock occurs if the threads of a process controlling a locked section are blocked by accessing another process' locked section. If the system detects that deadlock would occur, The PARAMETERS - fildes Is an open file descriptor. It must have O_WRONLY or O_RDWR permission for a successful locking call. - cmd Is a lock command, as described in the DESCRIPTION section. - size Is the number of contiguous bytes to be locked or unlocked. RETURN VALUES If successful, - EACCES The cmd parameter is F_TLOCK or F_TEST and the section is already locked by another process. - EBADF The fildes parameter is not a valid open descriptor. Or, the cmd parameter is F_LOCK or F_TLOCK and the fildes argument is not a valid file descriptor open for writing. - EDEADLK The cmd parameter is F_LOCK, the lock is blocked by some lock from another process, and putting the calling process to sleep, waiting for that lock to become free would cause a deadlock. - EINTR The cmd parameter is F_LOCK and a signal interrupted the process while it was waiting to complete the lock. - EINVAL The cmd parameter is not one of F_LOCK, F_TLOCK, F_TEST, or F_ULOCK. The size parameter plus the current file offset is less than 0. The fildes parameter refers to a file that does not support locking. - ENOLCK The cmd parameter is F_LOCK, F_TLOCK, or F_ULOCK and the number of locked regions available in the system would be exceeded by the request. - EOVERFLOW The offset of the first, or if size is not 0 then the last byte in the requested section cannot be represented correctly in an object of type off_t. MULTITHREAD SAFETY LEVEL MT-Safe. PORTING ISSUES Standard locks apply only to the local system. Locks are, by default, advisory. The NuTCRACKER Platform also supports a form of mandatory locking on 2012/8.1/2012R2/10/2016/2019. If the S_ISGID access mode bit is set for a file, and S_IXGRP is not set, then the NuTCRACKER Platform enables mandatory locking for the file using Win32 APIs. Mandatory locks are not inherited across The NuTCRACKER Platform also supports an advisory locking mode that uses the native Win32 locking APIs ( AVAILABILITY PTC MKS Toolkit for Professional Developers PTC MKS Toolkit for Professional Developers 64-Bit Edition PTC MKS Toolkit for Enterprise Developers PTC MKS Toolkit for Enterprise Developers 64-Bit Edition SEE ALSO - Functions: _NutConf(), _NutForkExecl(), _NutForkExecle(), _NutForkExeclp(), _NutForkExeclpe(), _NutForkExecv(), _NutForkExecve(), _NutForkExecvp(), _NutForkExecvpe(), chmod(), execl(), execle(), execlp(), execlpe(), execv(), execve(), execvp(), execvpe(), fcntl(), fork() - Miscellaneous: - lf64, struct stat PTC MKS Toolkit 10.3 Documentation Build 39.
https://www.mkssoftware.com/docs/man3/lockf.3.asp
CC-MAIN-2021-39
refinedweb
714
60.24
We always welcome contributions! If you have already registered, you can log in and submit your articles. Current authors ∞ Join the current authors to write systems tutorials on SysTutorials. - Eric Z Ma (338) - Weiwei Jia (8) - David Yang (4) - jameswarner (4) - Joseph Macwan (2) - Aaron Jacobson (1) - Colin Cieloha (1) - Ethan Millar (1) - Johnnymorgan (1) - Thirumal Venkat (1) - Jesse (1) Again, contributions are always welcome. Register an account ∞ Please first register an account. After registration, you can log in and submit your articles. If you wish to write one or more articles, it is also a good idea to contact us to discuss your interests. Introduce yourself ∞ As a small site, what we can only offer for your contribution is link love and credits at current. After the registration, please go to your profile and change the “Display name publicly” and “Biographical Info”. The “Display name publicly” will be displayed as the author name. “Biographical Info” will be the short bio after your posts. It can contain links to your website/blog/homepage for link love. We understand that you want some “dofollow” links. Please add rel="follow" to your links in your bio. But please keep the number of dofollow link within 2. Formatting the posts ∞ We have 3 kinds of syntax supported to edit/format your post: - HTML. The default without any specific configurations. - Markdown. Supported with special marks. Check “Use Markdown to write/format posts” for how to write posts in Markdown. BlogText. Supported while disabled by default. Check “Use BlogText to write/format posts” for how to write posts in BlogText. Use Markdown to write/format posts ∞ Mardown is supported to format your post. You can simply add [md] and [/md] between your text. For example, Welcome to [ST]()! ``` echo “hello world!” ``` [/md] will be rendered as echo "hello world!" Use BlogText to write/format posts ∞ You may choose to use the BlogText which is a wiki-like syntax to write posts and pages. This syntax is easy-to-learn and fast-to-type. To enable BlogText, select the “Use BlogText for this post/page” in the “BlogText” metabox in the post editing page. These two following links may help you quickly get familiar with it. BlogText Syntax Description. Some notes about post editing in BlogText format are as follows. Code snippets ∞ One common issue about post editing here is about code snippets inside of the post content. - You do not need to escape your code snippet having characters like <, >if the code is put inside of {{{and }}}. - You can just copy and paste the code plain text to the block between {{{and }}}and you code will look just like what it looks in your text editor and the program will automatically escape it for you when the post is displayed. If you are writing plain text (not inside of {{{and }}}), you still need to escape your code snippet. For example, to write a line of #include <stdio.h>, you can just write {{{ #include <stdio.h> }}} You do not need to worry about the < or translate it into HTML code manually. Hope this feature makes writing with code snippets easier. Table of Content ∞ You can add a “table of content” by adding [[[toc]]] at the text where you would like the table of content to appear. If you would like to force the table of content to take the full width, you can add a piece of CSS before it: <style type="text/css">.toc {max-width:100%; float: none; margin-left: 0;} </style> [[[toc]]]
https://www.systutorials.com/contribute/
CC-MAIN-2017-17
refinedweb
589
75
Strings Note The API reference links in this article will take you to MSDN. The docs.microsoft.com API reference is not complete. The string type represents immutable text as a sequence of Unicode characters. string is an alias for System.String in the .NET Framework. Remarks. Important The \DDD escape sequence is decimal notation, not octal notation like in most other languages. Therefore, digits 8 and 9 are valid, and a sequence of \032 represents a space (U+0020), whereas that same code point in octal notation would be \040. Note Being constrained to a range of 0 - 255 (0xFF), the \DDD and \x escape sequences are effectively the ISO-8859-1 character set, since that matches the first 256 Unicode code points.. // Using a verbatim string let xmlFragment1 = @"<book author=""Milton, John"" title=""Paradise Lost"">" // Using a triple-quoted string let""" In code, strings that have line breaks are accepted and the line breaks are interpreted literally as newlines, unless a backslash character is the last character before the line break. Leading white space on the next line is ignored when the backslash character is used. The following code produces a string str1 that has value "abc\ndef" and a string str2 that has value "abcdef". let str1 = "abc def" let str2 = "abc\ def" You can access individual characters in a string by using array-like syntax, as follows. printfn "%c" str1.[1] The output is b. Or you can extract substrings by using array slice syntax, as shown in the following code. printfn "%s" (str1.[0..2]) printfn "%s" (str2.[3..5]) The output is as follows. abc def. // "abc" interpreted as a Unicode string. let str1 : string = "abc" // "abc" interpreted as an ASCII byte array. let bytearray : byte[] = "abc"B String Operators There are two ways to concatenate strings: by using the + operator or by using the ^ operator. The + operator maintains compatibility with the .NET Framework string handling features. The following example illustrates string concatenation. let string1 = "Hello, " + "world" String Class Because the string type in F# is actually a .NET Framework System.String type, all the System.String members are available. This includes the + operator, which is used to concatenate strings, the Length property, and the Chars property, which returns the string as an array of Unicode characters. For more information about strings, see System.String. By using the Chars property of System.String, you can access the individual characters in a string by specifying an index, as is shown in the following code. let printChar (str : string) (index : int) = printfn "First character: %c" (str.Chars(index)) String Module Additional functionality for string handling is included in the String module in the FSharp.Core namespace. For more information, see Core.String Module. See also Feedback
https://docs.microsoft.com/en-us/dotnet/fsharp/language-reference/strings
CC-MAIN-2019-47
refinedweb
460
67.65
Chapter 3 - Natural Gas Worldwide, total natural gas consumption increases by an average of 1.6 percent per year in the IEO2009 reference case, from 104 trillion cubic feet in 2006 to 153 trillion cubic feet in 2030 (Figure 33). With world oil prices assumed to return to previous high levels after 2012 and remain high through the end of the projection, consumers opt for the comparatively less expensive natural gas for their energy needs whenever possible. In addition, because natural gas produces less carbon dioxide when it is burned than does either coal or petroleum, governments implementing national or regional plans to reduce greenhouse gas emissions may encourage its use to displace other fossil fuels. Natural gas remains a key energy source for industrial sector uses and electricity generation throughout the projection. The industrial sector currently consumes more natural gas than any other end-use sector and is expected to continue that trend through 2030, when 40 percent of world natural gas consumption is projected to be used for industrial purposes. In particular, new petrochemical plants are expected to rely increasingly on natural gas as a feedstockparticularly in the Middle East, where major oil producers, working to maximize revenues from oil exports, turn to natural gas for domestic uses. In the electric power sector, natural gas is an attractive choice for new generating plants because of its relative fuel efficiency and low carbon dioxide intensity. Electricity generation accounts for 35 percent of the worlds total natural gas consumption in 2030, up from 32 percent in 2006. In 2006, OECD member countries consumed 52 trillion cubic feet of natural gas and non-OECD countries consumed 53 trillion cubic feet, surpassing OECD gas consumption for the first time since the fall of the Soviet Union in 1991. In the IEO2009 reference case, natural gas consumption in the non-OECD countries grows more than twice as fast as consumption in the OECD countries, with 2.2 percent average annual growth from 2006 to 2030 for non-OECD countries, compared with an average of 0.9 percent for the OECD countries. The non-OECD countries account for 74 percent of the total world increment in natural gas consumption over the projection period, and the non-OECD share of total world natural gas consumption increases from 50 percent in 2006 to 58 percent in 2030. The OECD countries accounted for 38 percent of the worlds total natural gas production and 50 percent of natural gas consumption in 2006, making them dependent on imports from non-OECD sources for 25 percent of their total consumption. In 2030, the OECD countries account for 31 percent of production and 42 percent of consumption, with their dependence on non-OECD natural gas only slightly higher than in 2006, at 27 percent. In the non-OECD regions, net exports grow more slowly than total production. In 2030, 17 percent of non-OECD production is consumed in OECD countries, down from 19 percent in 2006. World Natural Gas Demand OECD Countries In the IEO2009 reference case, natural gas consumption in North America increases by an average of 0.8 percent per year from 2006 to 2030 (Figure 34). In the United Statesthe worlds largest natural gas consumerconsumption in most of the end-use sectors increases slowly through 2030. Natural gas consumption in the U.S. electric power sector, however, increases rapidly from 2006 through 2025 in response to generators concerns about the potential for new legislation limiting greenhouse gas emissions. Those concerns are addressed in the reference case by the addition of a risk premium on new carbon-intensive coal-fired generating capacity, which stimulates investment in less carbon-intensive natural-gas-fired capacity. In addition, the capital costs for new natural gas power plants are lower than those for nuclear and renewable alternatives. After 2025, the growth in U.S. natural gas consumption for electricity generation is slowed by rising natural gas prices, growing generation from renewables, and the introduction of clean coal-fired capacity. As a result, natural-gas-fired electricity generation in 2030 is 94 percent of the 2025 peak level. With the other end-use sectors showing slow but steady growth in consumption, total U.S. demand for natural gas in 2030 is 2.7 trillion cubic feet above the 2006 total of 21.7 trillion cubic feet.15 Canadas total natural gas consumption increases steadily, by 1.5 percent per year, in the reference case, from 3.3 trillion cubic feet in 2006 to 4.7 trillion cubic feet in 2030. The strongest growth is in the industrial sector, averaging 1.8 percent per year, and in the electric power sector, averaging 1.3 percent per year. The rapid growth projected for Canadas industrial natural gas consumption is based in large part on the expectation that purchased natural gas will be consumed in increasing quantities for mining of the countrys oil sands deposits. In 2006, an estimated 12 percent of Canadas total natural gas consumption was used for oil sands production; in 2030, that share could reach 22 percent of the countrys total gas use.16 In Mexico, more than 90 percent of natural gas consumption occurs in the industrial and electricity generation sectors combined. Although growth is projected in all sectors, the share of total consumption accounted for by the countrys industrial and electric power sectors continues to increase through 2030. The strongest growth is projected for the electricity generation sector, at an average annual rate of 4.1 percent, with consumption increasing almost threefold from 2006 to 2030, while natural gas use in the industrial sector grows by 1.8 percent per year. In 2006, the amount of natural gas consumed for electricity generation in Mexico was about one-half the amount consumed in the industrial sector; in 2030, it is expected to be nearly equal to consumption in the industrial sector. Natural gas consumption in OECD Europe grows by a modest 1.0 percent per year on average, from 19.2 trillion cubic feet in 2006 to 21.5 trillion cubic feet in 2015 and 24.1 trillion cubic feet in 2030mostly as a result of increasing use for electricity generation. Many nations in OECD Europe have made commitments to reduce carbon dioxide emissions, bolstering the incentive for governments to encourage natural gas use in place of other fossil fuels. In addition, given the long lead times and high costs associated with constructing new nuclear capacity, as well as the expected retirement of some existing nuclear facilities, natural gas and renewable energy sources become the fuels of choice for new generating capacity. In the IEO2009 reference case, natural gas is the second fastest-growing source of energy for electricity generation in the region, at 2.0 percent per year, as compared with renewables at 3.4 percent per year. Natural gas use in the regions electric power sector increases from 5.8 trillion cubic feet in 2006 to 7.7 trillion cubic feet in 2015 and 9.3 trillion cubic feet in 2030. Natural gas consumption in OECD Asia grows on average by 1.0 percent per year from 2006 to 2030. Japan, South Korea, and Australia/New Zealand are projected to add less than 1 trillion cubic feet of natural gas demand each between 2006 and 2030 (Figure 35). Total natural gas consumption for the region as a whole increases from 5.5 trillion cubic feet in 2006 to 7.0 trillion cubic feet in 2030. In Japan, the electric power sector is projected to remain the main consumer of natural gas, accounting for 64 percent of the countrys total natural gas consumption in 2030, up from 59 percent in 2006. In Australia/New Zealand, the industrial sector accounted for the largest share of natural gas use in 2006, at 56 percent of the total; in 2030, its share falls to 50 percent. Over the same period, the electric power sector share increases from 28 percent to 35 percent. South Koreas natural gas use is concentrated in the electric power and residential sectors, with each accounting for approximately one-third of the countrys total natural gas consumption in 2006; however, the electric power sector share is projected to grow to 47 percent in 2030. Non-OECD Countries Russia is second only to the United States in total natural gas consumption, with demand totaling 16.6 trillion cubic feet in 2006 and representing 55 percent of Russias total energy consumption. In the IEO2009 reference case, natural gas consumption in Russia grows by 0.9 percent per year on average, and its share of total energy consumption increases to 56 percent in 2030, outpacing growth in liquid fuels and coal consumption. Throughout the projection, the industrial and electric power sectors each account for around one-third of total natural gas consumption in Russia, about the same as in 2006. Natural gas consumption in the other countries of non-OECD Europe and Eurasia grows at an average annual rate of 1.3 percent, from 8.8 trillion cubic feet in 2006 to 12.0 trillion cubic feet in 2030 (Figure 36). In Turkmenistan, domestic consumers have received natural gas for free since 1993. Not surprisingly, then, Turkmenistan has had the fastest consumption growth in the region, averaging 16.1 percent annually from 2000 to 2006, as compared with 6.3 percent per year for the rest of Central Asia and Azerbaijan over the same period, and 0.1 percent per year for the rest of non-OECD Europe and Eurasia, excluding Russia. Outside Central Asia and Azerbaijan, most of the rest of the region relies on imports of natural gas from Russia to meet significant portions of their demand, and they have seen natural gas prices rise as Russia has endeavored to bring most of its export prices up to the levels paid by importing countries in OECD Europe. Non-OECD Asia, which accounted for 9 percent of the worlds total consumption of natural gas in 2006, shows the most rapid growth in natural gas use in the reference case and accounts for 31 percent of the total increase in world natural gas consumption from 2006 to 2030. Natural gas consumption in non-OECD Asia increases from 9.4 trillion cubic feet in 2006 to 24.5 trillion cubic feet in 2030, expanding by 4.1 percent per year on average over the projection period (Figure 37). In both China and India, natural gas currently is a minor fuel in the overall energy mix, representing only 3 percent and 8 percent, respectively, of total primary energy consumption in 2006. In the IEO2009 reference case, natural gas consumption rises rapidly in both countries, growing by 5.2 percent per year in China and 4.2 percent per year in India, on average from 2006 to 2030. In the rest of the non-OECD Asia countries, natural gas already is a prominent fuel in the energy mix, representing 23 percent of total primary energy consumption in 2006. Their combined annual consumption of natural gas increases more slowly than in either China or India, averaging 3.6-percent growth per year. With consumption starting from a much larger base, however, the rest of non-OECD Asia adds more natural gas consumption over the projection period than do China and India combined. Together, China and India are projected to consume 7.1 trillion cubic feet more natural gas in 2030 than in 2006, as compared with an increase of 8.1 trillion cubic feet for the rest of non-OECD Asia. Natural gas consumption grows at average annual rates of 2.0 percent in the Middle East and 3.2 percent in Africa from 2006 to 2030. There is very little infrastructure on the continent for intraregional trade of natural gas, and Algeria, Nigeria, Egypt, and Libyathe major African producersalso are the major consumers. The four countries plus South Africa and Tunisia, accounted for 94 percent of Africas natural gas consumption in 2006. Intraregional infrastructure also is limited in the Middle East, although both Dubai (in the United Arab Emirates) and Kuwait have plans to begin importing LNG to meet peak summer demands for natural gas [1]. In Central and South America, natural gas is the fastest-growing energy source in the reference case, with demand increasing on average by 2.4 percent per year, from 4.5 trillion cubic feet in 2006 to 8.1 trillion cubic feet in 2030. For Brazil, the regions largest economy, natural gas consumption more than doublesfrom 0.7 trillion cubic feet in 2006 to 1.8 trillion cubic feet in 2030. Several countries in the region are particularly intent on increasing the penetration of natural gas for power generation, in order to diversify electricity fuel mixes that currently are heavily reliant on hydropower (and thus vulnerable to drought) and reduce the use of more expensive oil-fired generation often used to supplement electricity supply. Although pipeline infrastructure is in place to move natural gas from Argentina to Brazil, Chile, and Uruguay and from Bolivia to Argentina and Brazil, recent concerns about the security of supply have spurred development of LNG regasification terminals in the importing nations. Specifically, Argentina became the regions first LNG importer in May 2008; Chile has plans to add two LNG regasification plants by 2010; a single terminal has been proposed for Uruguay; and Brazil plans to open three LNG terminals in the next several years [2]. World Natural Gas Production In order to meet the demand growth projected in the IEO2009 reference case, the worlds natural gas producers will need to increase supplies by 48 trillion cubic feet between 2006 and 2030. Much of the increase in supply is expected to come from non-OECD countries, which in the reference case account for 84 percent of the total increase in world natural gas production from 2006 to 2030. Non-OECD natural gas production grows by an average 2.1 percent per year in the reference case, from 65 trillion cubic feet in 2006 to 106 trillion cubic feet in 2030 (Table 5), while OECD production grows by only 0.8 percent per year, from 40 trillion cubic feet to 47 trillion cubic feet. With more than 40 percent of the worlds proved natural gas reserves, the Middle East accounts for the largest increase in regional natural gas production from 2006 to 2030 in the reference case and more than one-fifth of the total increment in world natural gas production. Currently, there are four major natural gas producers in the Middle East: Iran, Saudi Arabia, Qatar, and the United Arab Emirates, which together accounted for 83 percent of the natural gas produced in the Middle East in 2006. Each of the four countries has announced plans to expand natural gas production in order to meet the expected increase in regional demand and/or to supply markets outside the region. In Saudi Arabia there has been a concerted effort to increase natural gas production specifically for domestic consumption. At present, Saudi Arabia produces most of its natural gas from associated oil and natural gas fields; however, there may be fluctuations in oil production when Saudi Arabia balances global supply and demand, which also will affect the production of natural gas. To reduce the dependence of its natural gas production on oil production, Saudi Arabia has begun efforts to increase production from nonassociated natural gas fields. To that end, in 2003 private investment for natural gas exploration projects was invited at four sites in the Rub al-Khali desert [3]. Although 27 exploration wells are to be drilled at the sites by the end of 2009, results have not been encouraging thus far, and relatively low fixed prices set by Saudi Arabia for the natural gas have made the projects less attractive to foreign participants [4]. The Saudi national oil company, Saudi Aramco, on the other hand, has made several nonassociated natural gas finds near existing oil fields, some of which are expected to begin producing in the near term, including the Karan natural gas project, scheduled to begin producing 1.8 billion cubic feet per day in 2012. Iran has the worlds second-largest reserves of natural gas, after Russia, and currently is the Middle Easts largest natural gas producer. Political barriersincluding U.S. sanctions and international concerns about the countrys nuclear power ambitionshave lowered interest in foreign direct investment in the countrys natural gas sector. The largest natural gas development project in Iran is the offshore South Pars field, discovered in 1990, which is estimated to contain between 350 and 490 trillion cubic feet of natural gas reserves [5]. Located 62 miles offshore, South Pars has a 28-phase development plan spanning 20 years, with each phase set to produce more than 1 billion cubic feet per day. Iran has set a goal to raise marketed natural gas production to between 9 and 10 trillion cubic feet per year by 2010, more than double its 2006 marketed production of 4.4 trillion cubic feet. That goal may be difficult to achieve, however, without attracting substantial foreign investment in the near term. The worlds second-largest regional increase in natural gas production is expected in non-OECD Europe and Eurasia, which includes Russia. In the reference case, natural gas production in non-OECD Europe and Eurasia increases from 30.0 trillion cubic feet in 2006 to 40.3 trillion cubic feet in 2030. Russia remains the regions most important natural gas producer, providing the single largest increment in production, from 23.2 trillion cubic feet in 2006 to 31.3 trillion cubic feet in 2030. Russias Yamal Peninsula in northwestern Siberia has ample natural gas resources and should provide a major increase in Russian production over the long term. In 2008, state-owned Gazprom began construction of a trunk pipeline to connect Bovanenkovo field, the largest on the Yamal peninsula, to existing pipeline infrastructure. Also in 2008, Gazprom drilled the first production well in the Bovanenkovo field [6]. Gazprom intends to increase production from the Yamal peninsula to 12.7 trillion cubic feet by 2030, both to meet domestic demand for natural gas and to double the size of its exports from current levels. Developing new sources of natural gas is a priority for Gazprom, given that production at its three largest fields (Yamburg, Urengoy, and Medvezhye) is in decline [7]. There is concern that the global economic recession may reduce both domestic and export demand for natural gas in the short run and dampen investment in Russias natural gas sector. In the IEO2009 reference case, however, investment delays are not expected to hinder the growth of Russian supplies. Two other major natural gas projects also are underway in Russia: one to develop the resources around Sakhalin Island on the countrys east coast and another to develop the Shtokman field, off its western Arctic coast. The Sakhalin-1 project began supplying modest amounts of natural gas to domestic consumers in 2007. Production volumes from the first development phase are limited, however, until all the parties involved can agree on how the natural gas should be exported. Production from the second development phase will be exported as LNG, beginning in the first half of 2009, with supplies from the Sakhalin-2 LNG facility expected to reach its total capacity of 9.6 million metric tons in 2010 [8]. The Shtokman natural gas and condensate field in the Barents Sea is officially scheduled to begin producing 840 billion cubic feet of natural gas in 2013 (shipped via pipeline), with additional supplies for LNG anticipated beginning in 2014 [9]. That schedule may, however, prove to be overly ambitious. Substantial growth in natural gas production also is projected for Africa, increasing from 6.6 trillion cubic feet in 2006 to 9.6 trillion cubic feet in 2015 and 13.9 trillion cubic feet in 2030. Currently, more than 85 percent of Africas natural gas is produced in Algeria, Egypt, and Nigeria, which together accounted for 81 percent of Africas proved natural gas reserves as of January 1, 2009, with a combined total of 402 trillion cubic feet [10]. Nigeria has the most attractive geology for natural gas exploration and development and, in terms of reserves, the greatest potential to increase production. With a slightly larger quantity of proved reserves than Algeria, Nigeria produced only about one-third the amount of natural gas produced by Algeria in 20006. Security concerns and uncertainty over access terms are expected to inhibit resource development in Nigeria, however, and its contribution to the expected increase in Africas natural gas production is more modest than its reserves and geology would imply. The rest of the production increase is spread over a number of countries, including Algeria, Egypt, Libya and Angola. In the IEO2009 reference case, non-OECD Asias natural gas production increases by 8.8 trillion cubic feet from 2006 to 2030, with 2.2 trillion cubic feet of the increment coming from China, 1.3 trillion cubic feet from India, and 5.3 trillion cubic feet from the rest of non-OECD Asia. The strongest growth in natural gas production in recent years has come from China, with increases averaging 13.6 percent per year from 2000 to 2006. China is poised to become the regions largest natural gas producer, as production has declined in recent years in Indonesia and the increases in Chinas production have outpaced those from the regions other major producers, Malaysia, Pakistan, and India. Natural gas production from the OECD nations increases by 7.8 trillion cubic feet from 2006 to 2030 in the reference case. The largest regional increases are projected for the United States, at 5.3 trillion cubic feet, and Australia/New Zealand, at 2.8 trillion cubic feet. The projected production increases for the two regions are offset in part by production declines in Canada and OECD Europe, where existing conventional natural gas fields are in decline. From 2006 to 2030, total U.S. natural gas production per year increases by more than 5 trillion cubic feet, even as onshore lower 48 conventional production (from smaller and deeper deposits) continues to taper off. Unconventional natural gas is the largest contributor to the growth in U.S. production, as rising prices and improvements in drilling technology provide the economic incentives necessary for exploitation of more costly resources. Unconventional natural gas production increases from 47 percent of the U.S. total in 2006 to 56 percent in 2030. Natural gas in tight sand formations is the largest source of unconventional production, accounting for 30 percent of total U.S. production in 2030, and production from shale formations is the fastest-growing source, with an assumed 267 trillion cubic feet of undiscovered technically recoverable resources. Production of natural gas from shales increases from 1.1 trillion cubic feet in 2006 to 4.2 trillion cubic feet, or 18 percent of total U.S. production, in 2030. The expected growth in natural gas production from shales is far from certain, however, and continued exploration is needed to provide additional information on the resource potential. Natural gas production in Australia/New Zealand grows from 1.7 trillion cubic feet in 2006 to 4.4 trillion cubic feet in 2030 in the reference case, at an average rate of 4.2 percent per yearthe strongest growth in natural gas production among the OECD countries. In 2006, Australias production was far larger than New Zealands, at 1.5 trillion cubic feet and 0.1 trillion cubic feet, respectively. Australia continues to dominate production in the region throughout the projection, given its large resource base and plans for expanding production of natural gas both for domestic use and for export. The Carnarvon Basinlocated off the Northwest shelf in Western Australiais one of the countrys most important natural gas producing areas, holding an estimated 62 trillion cubic feet of probable reserves. In addition, new development in the deepwater Timor Sea at Browse Basin is expected to bring even more natural gas to market in the future [11]. There also has been considerable interest in developing Australias coalbed methane resources, especially as a fuel for LNG production. Five projects to produce coalbed methane for conversion to LNG currently are planned or under development in Australia, with LNG production from the first project (the 1.5 million metric ton Fishermans Landing project in Queensland) scheduled to begin in late 2012 [12]. Natural Gas Import Dependence OECD Countries OECD North America is largely a self-contained market for natural gas. Although North America imported 631 billion cubic feet of natural gas from other regions in 2006 through six LNG regasification terminals, including one in Mexico and five in the United States, those imports accounted for only 2 percent of its total natural gas consumption. Three new regasification terminals became operational during 2007 and 2008, including the first on North Americas Pacific Coast; and six more were being commissioned or were under construction at the beginning of 2009, including the first regasification terminal in Canada. With North Americas reliance on imports of natural gas projected to grow somewhat in the short to mid-term (Figure 38), imports rise to 6 percent as a share of total natural gas consumption in the region before falling back to 4 percent in 2030. An expected decline in U.S. demand for imports in the later years of the projection is the result of an increase in domestic production from unconventional sources and improvements in clean coal technology that allow for increased generation from coal-fired power plants, reducing demand for natural gas in the power sector. Consequently, U.S. dependence on natural gas imports declines from 17 percent in 2006 to 3 percent in 2030, as Canadas production and exports decline, and as domestic production from shale and other unconventional sources increases. Mexico, on the other hand, becomes more dependent on imports through most of the projection period, as production and investment in its natural gas sector fail to keep up with consumption growth. The shortfall in Mexicos domestic natural gas supply is expected to be balanced by pipeline imports from the United States and imports of LNG. The dependence of OECD Europe on imported natural gas continues to increase in the reference case, as demand grows modestly and indigenous natural gas production declines. In 2006, 44 percent of OECD Europes total natural gas demand was met with imports from outside the region. Imports from two countries, Russia and Algeria, accounted for more than 30 percent of the regions total consumption. In 2030, net imports make up 57 percent of total natural gas consumption. OECD Europes import dependence is an area of concern, particularly because natural gas exporters have signed several cooperation agreements (see "Gas Exporting Countries Forum: What is GECF and What is the Objective?), and parts of the region have experienced supply disruptions during three of the past four winters. In January 2006, Russias Gazprom cut natural gas supplies to Ukraine. Natural gas prices, pipeline transit fees, and debts owed by Ukraine all were at issue. The conflict was resolved three days later [13]. In January 2008, Turkmenistan cut natural gas exports to Iran, and Iran reacted by cutting exports to Turkey to make up for the lost imports from Turkmenistan. In turn, Turkey cut its exports of natural gas (originally imported from Azerbaijan) to Greece to make up for the lost imports from Iran. Subsequently, Gazprom increased its exports of natural gas to Turkey. More recently, in January 2009, another dispute with Ukraine again led Russia to curtail natural gas exports to Ukraine [14]. The basic issues were the same as in 2006: natural gas prices, pipeline transit fees, and debts owed by Ukraine. In this instance, however, rather than lasting three days, the dispute lasted almost three weeks. On January 1, Russia reduced natural gas deliveries to the Ukrainian border, but some gas continued to flow across Ukraine to downstream customers. On January 7, all natural gas exports via Ukraine stopped, as Russia and Ukraine blamed each other for shutting down the pipelines. Natural gas flows were not resumed until January 20, when Russia and Ukraine finally reached an agreement on prices and pipeline transit fees [15]. In OECD Asia, Japan and South Korea continue to be almost entirely dependent on LNG imports for natural gas supplies. The two countries continue to be major players in LNG markets (with Japan representing 41 percent of global LNG imports in 2006 and South Korea 15 percent) despite consuming relatively small amounts of natural gas on a global scale (representing 3 and 1 percent, respectively, of world consumption in 2006). South Korea could begin receiving natural gas supplies by pipeline from Russia sometime after 2015, but Japan and South Korea are expected to remain influential in LNG markets even as growth in global production of LNG outpaces their import demand. Much of the growth in Australias natural gas production is expected to support planned or proposed LNG export projects, although it is possible that some projects and the related production increases could be delayed. Pluto LNG, currently under construction in Australia, is one of the few natural gas liquefaction projects for which a final investment decision has been made in the past few years [16]. Rising costs for liquefaction projects have led many companies around the world to delay project commitments, and decisions on other projects could be delayed as a result of the current global financial crisis and the impending global oversupply of LNG. Projects in Australia face additional hurdles, including a Western Australia policy that requires new export projects to reserve 15 percent of production for domestic use. Also, LNG liquefaction plants are significant contributors to Australias carbon dioxide emissions, and new obligations under Australias Carbon Pollution Reduction Scheme, enacted in December 2008 (and to commence in 2010), may make some liquefaction projects uneconomical [17]. In the near term, Russias net exports of natural gas as a percentage of production are projected to decline, as the global economic slowdown affects demand in Europe and, in turn, Russias pipeline exports to European countries. In the longer-term, the reference case assumes that the necessary investments will be made to develop Russias vast natural gas resources, allowing it to continue supplying increasing volumes of natural gas to its neighbors. Exports, which represented 28 percent of Russias natural gas production in 2006, are projected to fall to 26 percent in 2010 before growing to more than 30 percent in 2030. Production of natural gas in Russia grows by 1.3 percent per year on average in the IEO2009 reference case, from 23 trillion cubic feet in 2006 to 31 trillion cubic feet in 2030. Natural gas production in the Middle East and in Africa is expected to become oriented more toward exports as the Medgaz pipeline from Algeria to Spain comes on line and new liquefaction capacity comes on line in Qatar, Algeria, Yemen, and Angola. Both the Middle East and Africa are projected to increase production by more than 40 percent from 2006 to 2015. In the Middle East, net exports as a share of total natural gas production grow from 14 percent in 2006 to 24 percent in 2015. In Africa, exports grow from 55 percent of production in 2006 to 57 percent in 2010, before falling back to 56 percent in 2015. After 2015, the pace of export developments in the two regions slows, and with their domestic demand continuing to grow, the rate of increase in the export share of production in the Middle East slows, while the export share of Africas natural gas production declines. Indias dependence on imported LNG is projected to be reduced in the short term, when new natural gas production from the Krishna Godavari Basin comes on line. Accordingly, the import share of Indias natural gas consumption falls from 20 percent in 2006 to 13 percent in 2010 (Figure 39). Much of Indias current production, however, comes from more mature natural gas fields that are beginning to decline, and in 2030 India is projected to be dependent on imports for more than 30 percent of its total natural gas consumption. Pipelines to bring natural gas from Iran, Central Asia, or Myanmar have been discussed in the past, but to date no firm agreements have been reached. Chinas dependence on natural gas imports grows throughout the projection period. Although new supplies from Sichuan province are expected to come on line in the short term, and the countrys total domestic production of natural gas increases by 3.1 percent per year on average from 2006 to 2030 in the reference case, production growth cannot keep up with demand growth. In 2030, China could be dependent on imports for more than one-third of its total natural gas consumption. To help meet its growing need for imports, China opened its first LNG regasification facility in 2006 at Guangdong [18]. Shanghai LNG was to be the second regasification terminal in China, with startup in early 2009; however, a fatal accident at the facility during pipeline testing has delayed its startup. Instead, Fujian LNG, which is expected to begin operation in mid-2009, will be Chinas second LNG receiving terminal [19]. Additionally, the first imports of natural gas into China by pipeline are expected by 2011, when a new pipeline from Turkmenistan via Kazakhstan is to be inaugurated [20]. In 2006, the rest of non-OECD Asia (excluding China and India) was a net exporter of natural gas. Three countriesIndonesia, Malaysia, and Bruneicurrently have LNG export facilities. There also have been several proposals made to build LNG liquefaction facilities in Papua New Guinea. Although Indonesias LNG exports peaked in 1999 at about 30 million metric tons (1.4 trillion cubic feet of natural gas) and had declined to about 23 million metric tons (1.1 trillion cubic feet of natural gas) in 2006, a new liquefaction facility, Tangguh LNG, is scheduled to come on line in 2009, temporarily reversing the decline in the countrys total LNG exports. Production from the two LNG facilities currently in operation in Indonesia is expected to continue declining [21]. In this grouping (non-OECD Asia excluding China and India), only one country, Taiwan, currently has an LNG import terminal, although there have been proposals to build regasification terminals in Singapore, Pakistan, Thailand, the Philippines, and Indonesia. In 2006, net exports equaled 24 percent of total production in the group of countries, but with domestic demand continuing to grow, imports are projected to account for 6 percent of their total natural gas consumption in 2030 in the IEO2009 reference case. On a percentage basis, Brazils natural gas production shows the most rapid growth in the reference case. Starting from 0.3 trillion cubic feet in 2006, Brazils production is projected to grow by an average of 6.6 percent per year to 2030. In 2006, Brazil depended on imports from Bolivia for nearly one-half of its natural gas consumption; in 2030, its import dependence is less than 10 percent of total consumption. In the short to mid-term, however, Brazil is planning to increase imports. Two LNG import terminals are expected to start up in 2009, and there are plans to build at least one more regasification terminal in the country [22]. At the same time, Brazil is also discussing the possibility of building an LNG liquefaction facility that would allow it to supply its own regasification terminals throughout the country or to export small volumes to neighboring countries. World Natural Gas Reserves Historically, world natural gas reserves have generally trended upward (Figure 40). As of January 1, 2009, proved world natural gas reserves, as reported by Oil & Gas Journal,17 were estimated at 6,254 trillion cubic feet 69 trillion cubic feet higher than the estimate of 6,186 trillion cubic feet for 2008 [23]. Reserves have remained relatively flat since 2004, despite growing demand for natural gas, implying that, thus far, producers have been able to continue replenishing reserves successfully with new resources over time. The largest increases in reported natural gas reserves in 2009 were for Iran and the United States. Iran added an estimated 43 trillion cubic feet (a 5-percent increase over 2008 proved reserves) and the United States added 27 trillion cubic feet (a 13-percent increase). There were smaller, but still substantial, reported increases in reserves in Indonesia, Kuwait, Venezuela, and Libya. Reserves in Indonesia and Kuwait both rose by 13 percentwith Indonesias reserves increasing by 12 trillion cubic feet and Kuwaits by 7 trillion cubic feet. Venezuela added nearly 5 trillion cubic feet of reserves (a 3-percent increase), and Libya added 4 trillion cubic feet (a 9-percent increase). Much of the increase in U.S. natural gas reserves results from expanded knowledge and exploration of shale resources. Outside the United States there has been almost no exploration of shale resources, and correspondingly little is known about the resource potential in other countries. Technologies that have greatly improved the economics of U.S. shale plays, including horizontal drilling and hydraulic fracturing, probably can be adapted to resource plays in other parts of the world. These technologies may, for instance, be applied in Europe before too long. A few North American energy companies have begun to explore potential shale plays in Central and Western Europe. At the same time, a few European energy companies have invested in North American shale plays. As the technologies are applied in other regions, economically recoverable natural gas reserves in the rest of the world are likely to increase, as they have in the United States. The largest reported declines in natural gas reserves in 2009 were in Kazakhstan (a decrease of 15 trillion cubic feet) and Qatar (13 trillion cubic feet). The Kazakhstan decline represents a 15-percent drop, although at 85 trillion cubic feet, the country still holds significant proved reserves. Given the vast resources in Qatar (now about 892 trillion cubic feet), the 2009 decrease amounts to only a 1-percent decline in the countrys total proved reserves. Turkmenistan also reported a fairly substantial decrease in reserves of 6 trillion cubic feet (6 percent). Germany and the United Kingdom reported smaller decreases, but they represent more significant shares of the two countries total reserves. For Germany, the reported decrease of 3 trillion cubic feet amounts to a 31-percent reduction in proved reserves. For the United Kingdom, the decrease of 2 trillion cubic feet amounts to a 17-percent reduction. Almost three-quarters of the worlds natural gas reserves are located in the Middle East and Eurasia (Figure 41). Russia, Iran, and Qatar together accounted for about 57 percent of the worlds natural gas reserves as of January 1, 2009 (Table 6). Despite high rates of increase in natural gas consumption, particularly over the past decade, reserves-to-production ratios for most regions are substantial. Worldwide, the reserves-to-production ratio is estimated at 63 years [24]. By region, the highest ratios are about 48 years for Central and South America, 78 years for Russia, 79 years for Africa, and more than 100 years for the Middle East. Notes and Sources References
http://www.eia.doe.gov/oiaf/ieo/nat_gas.html
crawl-002
refinedweb
6,532
51.07
Windows has been tested against all versions of Python from 2.4 to 3.2. It should work with any recent version of pywin32. When all’s said and done, it’s just a module. But for those who like setup programs: python setup.py install Or download the Windows installer and double-click. Have a look at the wmi Tutorial or the wmi Cookbook. As a quick taster, try this, to find all Automatic services which are not running and offer the option to restart each one: import wmi c = wmi.WMI () for s in c.Win32_Service (StartMode="Auto", State="Stopped"): if raw_input ("Restart %s? " % s.Caption).upper () == "Y": s.StartService () If you’re running a recent Python (2.4+) on a recent Windows (2k, 2k3, XP) and you have Mark Hammond’s win32 extensions installed, you’re probably up-and-running already. Otherwise... If you’re running Win9x / NT4 you’ll need to get WMI support from Microsoft. Microsoft URLs change quite often, so I suggest you do this: (just in case you didn’t know) Specifically, builds 154/155 fixed a problem which affected the WMI moniker construction. You can still work without this fix, but some more complex monikers will fail. (The current build is 214 so you’re probably ok unless you have some very stringent backwards-compatible requirement). (NB my own experience over several systems is that this step isn’t necessary. However, if you have problems...) You may have to compile makepy support for some typelibs. The following are reported to be significant: If you’ve not done this before, start the PythonWin environment, select Tools > Com Makepy utility from the menu, select the library by name, and click [OK].
http://svn.timgolden.me.uk/wmi/trunk/docs/_build/index.html
crawl-003
refinedweb
287
67.35
Those keep ports closed against outside attempts to connect unless a set of connections attempts are made to a predetermined list of ports in a specific order — or, at least, they provide the means of implementing port knocking through some additional utility. In other words, port knocking is like a secret knock, where you knock on a door a certain number of times with some recognizable pauses between some of the individual knocks, producing a pattern that can be used to identify the person on the other side. This can help solve the problem of needing to allow connections to a commonly used port, such as port 22 for SSH, without just leaving the port open for any schmuck in the world to start bombarding it with connection attempts in a brute-force attack on your user passwords. Not only does this help cut down on the likelihood that a user account that accepts remote connections can be compromised by a brute-force attempt to crack security, but it also eliminates a lot of opportunity for denial-of-service attacks because it will be more difficult for an attacker to find an open port to attack. I'm not here today to tell you how to set up port knocking for your firewall using some network administrator tool such as The Doorman. This post is for the people writing their own code to enforce security procedures. In particular, this is for Ruby programmers who need (or want) to implement their own port knocking solution. As of September 20, version 1.0.0 of the Fire library has been released, and it makes writing code for port knocking solutions stunningly easy. The whole thing is deceptively simple to use. To create a class containing the code necessary to validate a port knocking sequence and perform some task when that sequence is successfully supplied to your firewall system, you might write something like this: require 'fire' class PortButler < Porter def initialize(*arr) super(*arr) end def rules(pkt) return true if pkt.to_s =~ /192.168.0.10/ false end def accept(pkt) puts "Success!" # do something end endportal = PortButler.new(17, 2200, 77, 139) The firewall configuration needed to make this work will, of course, vary from system to system, depending on the type of firewall you're using. It should be pretty obvious from the above example, though, that writing your port knocking management code is a breeze with the addition of Fire.rb to your security scripting repertoire. If you're a Ruby programmer who hasn't looked into scripting security solutions for your systems before, this may nudge you in new and interesting directions. Give it a try, and see what you can learn while you do so. If, on the other hand, you're an old hand at scripting security solutions for your firewalls, proxy servers, and other network resources, this may be worth pursuing as a means of making your job easier — and capturing your interest in learning the elegant little language that everyone's talking about as the Next Big Thing in Web development. Let me know how it works out for you. Full Bio Chad Perrin is an IT consultant, developer, and freelance professional writer. He holds both Microsoft and CompTIA certifications and is a graduate of two IT industry trade schools.
http://www.techrepublic.com/blog/it-security/use-the-firerb-library-to-write-port-knocking-scripts-in-ruby/
CC-MAIN-2017-39
refinedweb
557
55.27
This tutorial will guide you through building the example present in the examples/AsyncProgressBar directory. The nature of this example is to demonstrate one way to handle a longer task within Quart, along with an example of how to show the progress of that task, without blocking or slowing down user interaction. examples/AsyncProgressBar To run the example, in examples/AsyncProgressBar the following should start the server, (see Installation first), $ export QUART_APP=progress_bar:app $ quart run the example web app is then available at. This example uses Redis as a data store for keeping track of the state of our long task. The description as found on their site is as follows. information can be found at. Using Redis opens up a whole host of possible use cases which are beyond the scope of this tutorial, but can include: Interacting with information across multiple servers and / or applications Atomic data operations to avoid race conditions Real time analysis Caching / Queueing Storage of data along with a pre-defined Time To Live, which comes in handy for user sessions and auto log-out more… Interacting with information across multiple servers and / or applications Atomic data operations to avoid race conditions Real time analysis Caching / Queueing Storage of data along with a pre-defined Time To Live, which comes in handy for user sessions and auto log-out more… Redis is separate from Quart and does need to be installed to utilize it. Instructions on how to do that can be found here. It is always best to run python projects within a virtualenv, which should be created and activated as follows, $ cd AsyncProgressBar $ pipenv install quart aioredis redis for this blog we will need Quart, aioredis, and redis libraries. Now pipenv can be activated, $ pipenv shell First we’ll import the required libraries, and initialize the Quart web app object. import asyncio import random import aioredis import redis from quart import Quart, request, url_for, jsonify app = Quart(__name__) Then, for the purposes of this tutorial and so that you have a clean slate each time you run the app, we’ll create a synchronous connection to the Redis database and run FLUSHDB to clear any data from the last execution. In production, depending on what it is Redis and / or the app(s) are being used for, this may not be desired behavior. Please modify where necessary. FLUSHDB sr = redis.StrictRedis(host='localhost', port=6379) sr.execute_command('FLUSHDB') Let’s define an asynchronous function to handle our work called some_work(). some_work() async def some_work(): global aredis await aredis.set('state', 'running') work_to_do = range(1, 26) await aredis.set('length_of_work', len(work_to_do)) for i in work_to_do: await aredis.set('processed', i) await asyncio.sleep(random.random()) await aredis.set('state', 'ready') await aredis.set('percent', 100) What we’re doing here is setting the key state to running and then using a for loop with random.random() to simulate work that may need to be done. Once complete the state is returned to ready so that more work can be queued and performed. state running random.random() ready That’s all well and good, but how do we access that from within the web application? We’ll cover that a bit later. Next is the function to check the status of the work. This function returns a JSON response, which is used by progress() below to generate the progress bar. progress() @app.route('/check_status/') async def check_status(): global aredis, sr status = dict() try: if await aredis.get('state') == b'running': if await aredis.get('processed') != await aredis.get('lastProcessed'): processed = int(await aredis.get('processed')) await aredis.set('percent', round( processed / int(await aredis.get('length_of_work')) * 100, 2)) await aredis.set('lastProcessed', str(processed)) except: pass try: status['state'] = sr.get('state').decode() status['processed'] = sr.get('processed').decode() status['length_of_work'] = sr.get('length_of_work').decode() status['percent_complete'] = sr.get('percent').decode() except: status['state'] = sr.get('state') status['processed'] = sr.get('processed') status['length_of_work'] = sr.get('length_of_work') status['percent_complete'] = sr.get('percent') status['hint'] = 'refresh me.' return jsonify(status) in check_status(), if the state is running then we’ll retrieve information on the progress, calculate a percentage, and throw it all into a dictionary. That dictionary is then handed to jsonify() to return a JSON response. The synchronous calls to Redis were added to work around an issue where aredis did not exist yet. check_status() jsonify() aredis Next is the function to display a progress bar, to visually represent where we are in the work that is being done. This view / endpoint is just a page which uses Javascript and JQuery to poll check_status(), via AJAX, on an interval of 1000 milliseconds, as long as the percentage is less than 100. Each time the percentage changes, the bar and the text under the bar are updated. When the percentage reaches 100, then the script displays “Done!”. 1000 @app.route('/progress/') async def progress(): return """ <!doctype html> <html lang="en"> <head> <meta charset="utf-8"> <meta name="viewport" content="width=device-width, initial-scale=1"> <title>Asyncio Progress Bar Demo</title> <link rel="stylesheet" href="//code.jquery.com/ui/1.12.1/themes/base/jquery-ui.css"> <link rel="stylesheet" href="/resources/demos/style.css"> <script src=""></script> <script src=""></script> <script> var percent; function checkStatus() { $.getJSON('""" + url_for('check_status') + """', function (data) { console.log(data); percent = parseFloat(data.percent_complete); update_bar(percent); update_text(percent); }); if (percent != 100) { setTimeout(checkStatus, 1000); } } function update_bar(val) { if (val.length <= 0) { val = 0; } $( "#progressBar" ).progressbar({ value: val }); }; function update_text(val) { if (val != 100) { document.getElementById("progressData").innerHTML = " <center>"+percent+"%</center>"; } else { document.getElementById("progressData").innerHTML = " <center>Done!</center>"; } } checkStatus(); </script> </head> <body> <center><h2>Progress of work is shown below</h2></center> <div id="progressBar"></div> <div id="progressData" name="progressData"><center></center></div> </body> </html>""" Next is just a view for entering / interacting with the example, so the work can be started. It starts the work by calling the start_work() function. start_work() @app.route('/') async def index(): return 'This is the index page. Try the following to <a href="' + url_for( 'start_work') + '">start some test work</a> with a progress indicator.' The start_work() function then gets the event loop, creates an asynchronous connection to Redis. After that, if the current state is running, it will advise you to wait for the current work to finish. If the state is ready, then it will add the some_work() function to the event loop, and return an indication that the work has been started, before redirecting the user to the /progress view. /progress @app.route('/start_work/') async def start_work(): global aredis loop = asyncio.get_event_loop() aredis = await aioredis.create_redis('redis://localhost', loop=loop) if await aredis.get('state') == b'running': return "<center>Please wait for current work to finish.</center>" else: await aredis.set('state', 'ready') if await aredis.get('state') == b'ready': loop.create_task(some_work()) body = ''' <center> work started! </center> <script type="text/javascript"> window.location = "''' + url_for('progress') + '''"; </script>''' return body Finally, we run the app. if __name__ == "__main__": app.run('localhost', port=5000, debug=True) This wraps up the tutorial on performing asynchronous work withing a Quart web application. This is but one way to accomplish the handling of a long task without blocking the user interface.
https://pgjones.gitlab.io/quart/tutorials/AsyncProgressBar_tutorial.html
CC-MAIN-2020-40
refinedweb
1,206
50.53
Polygon rasterization description. More... #include <polygon_rasterizer.h> Polygon rasterization description. Constructs a polygon rasterizer description. Returns the polygon cull clipping mode. Returns the filling mode for polygons. Returns the side considered the front of a face. Returns the offsetting factor. Returns the offsetting units. Returns true if the polygons are anti-aliased. Returns true if cull testing is enabled. Returns true if lines are offset. Returns true if points are offset. Returns true if polygons are offset. Enables/disables anti-aliasing. Enables/disables polygon cull clipping. Sets the polygon cull clipping mode. Sets the filling mode for polygons. Sets which side is the front side of a face. Enables/disables line offsetting. Sets the offset factor. Sets the offset units. Enables/disables point offsetting. Enables/disables polygon offsetting.
http://gyan.fragnel.ac.in/docs/clanlib/classCL__PolygonRasterizer.html
CC-MAIN-2019-09
refinedweb
128
57.43
Say for example in this program I intend to get the sum of the elements in an array. I know it's possible with only one method but for learning's sake, lets say I'd use another method. So far I've got this: import java.util.Random; public class Main { public static void main(String args[]){ Random Rand = new Random(); int arNum[] = new int[10]; int ctr=0; int Answer; for(ctr=0;ctr<10;ctr++){ arNum[ctr] = ((Rand.nextInt(6)) + 1); } Answer = getSum(arNum[ctr]); } public static int getSum(int x[]){ int count=0; int sum = 0; for(count=0;count<=x.length;count++){ sum += x[count]; } return sum; } } Now the error is here: Answer = getSum(arNum[ctr]); Why doesn't that work?
http://www.dreamincode.net/forums/topic/313740-simple-question-about-arrays-in-methods/
CC-MAIN-2017-13
refinedweb
126
55.44
A online interactive color wheel with CIELab or HSV color space Here's a working implementation of a color wheel for OSWeb (>= 1.3.5). It shows a color circle in either CIELab or HSV color space, and the participant can select up to four colors. This can be useful for visual-working-memory experiments. Update June 18, 2020: Color circle now randomly rotates (6 possible rotations). Enjoy! Hi Sebastiaan, This is great. However, it only seems to work if you run the experiment in quick run mode, and not if you run it in browser mode (at least not on my computer: Macbook). I ran into the same problem, when I tried to program some feedback specifying how far of (in degrees) participants are from the correct location. The problem (I believe) is that despite specifying uniform coordinates, in browser mode (0,0) is nevertheless the top left of the screen such that the exact center coordinates differs a function of the resolution of the experiment window (i.e., different coordinates if you for example do or do not run the experiment full screen). For now, as a workaround, I ask participants to click on the centre of the screen and base all my calculations on those coordinates. However, it would be nice if this was not necessary. Cheers, Dirk Hi Dirk, You need to update to OSWeb 1.3.5. In older versions there was indeed a bug with mouse coordinates! Cheers, Sebastiaan Hi sebastiaan, Ah ok, I downloaded a new version from OpenSesame yesterday so I thought I was up to date :) But I am indeed running osweb 1.3.3. Can I update OSWeb without downloading a new version of OpenSesame? I cannot seem to find this information on the website. Cheer, Dirk Sorry should have been more clear. If I follow the online instructions and run from within Opensesame: import pip._internal pip._internal.main(['install', 'opensesame-extension-osweb']) it outputs Requirement already satisfied: opensesame-extension-osweb in /Applications/OpenSesame.app/Contents/Resources/lib/python2.7/site-packages (1.3.3.0) Sorry again, a long quarantine day. I forgot --upgrage. Anyway it works now, this is awesome!
https://forum.cogsci.nl/discussion/5931/a-online-interactive-color-wheel-with-cielab-or-hsv-color-space
CC-MAIN-2022-27
refinedweb
363
56.15
On Wednesday, August 13, 2003, at 03:19 pm, David Jencks wrote: > On Wednesday, August 13, 2003, at 08:06 AM, James Strachan wrote: >> On Wednesday, August 13, 2003, at 05:15 am, David Jencks wrote: >>> On Tuesday, August 12, 2003, at 03:13 PM,) ? >>> >>> -- you can deploy a pojo as an mbean with __no__ modification or >>> interface required. For instance, in JBoss 4, the jca adapter >>> objects are deployed directly as xmbeans. Thus you can see and >>> modify all the properties of a deployed ManagedConnectionFactory >>> instance. >>> --"artificial" attributes that are not actually in the underlying >>> object. >>> --attributes whose accessors don't follow the standard mbean naming >>> convention >>> --you can include lots of descriptive info to display in a >>> management console. This makes the console self documenting. >>> --with an interceptor based model mbean implementation, you can add >>> interceptors that implement additional operations. For instance, in >>> the JBoss 4 jca 1.5 support, ActivationSpec instances are deployed >>> directly as xmbeans. There is an no-arg "start" operation >>> implemented by an interceptor that is called by the JBoss lifecycle >>> management and in turn calls the jca start(ResourceAdapter) method. >>> (the ObjectName of the deployed resourceadapter instance is held in >>> an "artificial" attribute). >> >> Agreed. This is good. I like the idea of supporting POJOs and beans. >> The nice thing from Geronimo's perspective is they can be easily >> converted to MBeans and it doesn't need to worry about the different >> ways you can use to make an MBean. >> >> >>>> >>>> ?). >>> >>> How can you generate the metadata without at least minimal source >>> code markup? What is the useful stuff you are thinking of? >> >> Incidentally, I've often found making MBeans via XDoclet painful. >> Typically when I have some service, I want all its public methods and >> attributes to be visible to JMX unless I explicitly say not to. >> Typically the methods I put on the service are for JMX anywyas. So I >> don't wanna have to litter my code with @jmx:attribute and >> @jmx:operation and so forth all over the code - I find it easier to >> use the *MBean.java interface approach. > > You find writing a *MBean.java class yourself easier than including > the xdoclet tags? Yes! :) Maybe its just me, but I can take advantage of refactoring tools etc. So I can add a method to the interface & the IDE will generate a method body, or use 'extract to interface' and so forth. Its so hard to miss a doclet tag to mess things up - hacking doclet tags feels like lots of extra error-prone typing - maybe I'm just lazy. >> So I can imagine a way of generating MBeans metadata in a simple way >> with most of the metadata defaulted unless overridden by metadata >> (doclet tags or XML etc). e.g. use the normal javadoc descriptions of >> the service & attributes & methods unless I explicitly override it - >> plus default to use all public methods for introspection. >> >> e.g. this bean should be usable to generate pretty much all of the >> JMX metadata. >> >> /** An egg timer... */ >> public class EggTimer implements Serializable { >> >> /** Starts the egg timer */ >> public void start() {...} >> >> pubic int getFoo() {...} >> >> /** sets the foo thingy */ >> public void setFoo() {...} >> >> /** @jmx:hide */ >> public void hack(); >> // the above method will be hidden from JMX >> // which is the exception >> } >> >> >> Then in your build system you could have something like this >> >> <xdoclet:makeJmx> >> <fileset dir="src/java" includes="**/*Service.java"/> >> </xdoclet:makeJmx> >> > > This would be a pretty easy xdoclet template to write, although > xdoclet might need some help to distinguish between operations and > attributes. I guess we could use the javadoc for the jmx comment? Absolutely - both on the class, methods & properties. Of course it can be overloaded - I'm just discussing a defaulting mechanism if you don't happen to specify any jmx-related doclet tags. > I'm not sure how to choose between the getter and setter javadoc for > an attribute. Agreed. I've been meaning to have a go at trying this out for some time. I've discussed this with Aslak of XDoclet a few times - now he lives a mile away there's no excuse for not sorting it out. AFAIK Xdoclet2 should allow this fairly easily I hope. I basically want a pipeline approach. extract doclet tags -> transformer -> xdoclet template. e.g. take all the doclet tags & then process them with rules - adding sensible defaults for a project (like for all classes of a certain kind, name pattern, interface or package, apply some doclet tag defaulting rules first before generating the code). Should be easy to do for all kinds of XDoclet generations. >>>>> ? >>> How would it decide which operations/attributes should be exposed? >>> For the jca stuff I mentioned, the needed metadata is specified in >>> the ra.xml file. >> >> Default to all of them that are available via introspection unless >> any metadata says not to (or hides certain things). Whether this is >> via XML config file or doclet tags is optional. > > This seems quite reasonable. Phew :) James -------
http://mail-archives.apache.org/mod_mbox/geronimo-dev/200308.mbox/%3C8B09F9AC-CD9B-11D7-A504-000A959D0312@yahoo.co.uk%3E
CC-MAIN-2016-22
refinedweb
825
64.51
Learning the basics of a Svelte component Creating a component in Svelte is extremely simple. I thought creating a functional component in React couldn’t get any simpler. import React from 'react'; const GreetMessage = () => ( <h1>Hello John, Ruben here!</h1> ); export default GreetMessage; Oh boy, was I wrong. Svelte came in and made creating components much simpler. Creating the Svelte component In your text editor, create a new file with the file extension name .svelte. You’re literally done! Or you can use your terminal. touch Greet.svelte Exporting a Svelte component In React it’s pretty simple to export a component. Let’s export the React Greet component I created above import React from 'react'; const GreetMessage = () => ( <h1>Hello John, Ruben here!</h1> ); export default GreetMessage; Once you’ve created your class or functional React component, you must use the export keyword or you can use export default to export that React component. In Svelte, by default it’s exported as soon you create your Svelte file! So you can have an empty file and it’s already exported. You don’t have to type any special keywords. Importing a component Svelte uses the import keyword just like React does. Let’s create a simple Greet component in Svelte. This component will be used as a child component. // Greet.svelte <h1>Hello John, Ruben here!</h1> <style> h1 { color: #ff3e00; text-transform: uppercase; font-size: 4em; font-weight: 100; } </style> Svelte makes writing components as native as possible. It keeps the plain syntax we’ve been use to writing for decades. Plain HTML, and plain styling. Here’s the main parent component in Svelte using the Greet component that I created above as a child component. // App.svelte <script> import Greet from './Greet.svelte'; </script> <main> <greet></greet> </main> <style> main { text-align: center; padding: 1em; max-width: 240px; margin: 0 auto; } @media (min-width: 640px) { main { max-width: none; } } </style> Inside a <script> tag, I said to import the Greet Svelte component. In Svelte, you must add the file name with the extension, otherwise it will fail the compilation. In React you can get away with import Greet from './Greet'. But not in Svelte. That’s okay, it’s not biggie. Passing props to child Svelte component This is an extremely simple step as well. First I need to modify my Greet component to accept a prop. This will feel similar if you’re using TypeScript or prop-types with your React project. I’m going to let my Svelte Greet component accept a property called name. <script> export let name; </script> <h1>Hello {name}, Ruben here!</h1> // ... styles First I need to do is in a <script> tag, define an a variable and export it. Also, I want to replace the string John with the variable, name. Back in the App.svelte file will just update the Greet component. <Greet name="John"></Greet> That’s it! If you were to not define or pass the name property, Svelte will not break, and it will not print the name property. Using state in a Svelte component Any variable inside your Svelte component <script> tag is state property. Let’s modify the App.svelte file. <script> import Greet from './Greet.svelte'; let name = "John"; setTimeout(() => { name = "Daniel"; }, 2000); </script> <main> <Greet name={name}></Greet> </main> I’ve added a new variable called, name. name is assigned to a string that says, John. I’ve also put a setTimeout() for 2 seconds. After 2 seconds change name to say Daniel. I’ve also grabbed name, and tossed to my child component. So my Svelte child component will always receive the new state data. It honestly cannot get any easier than that. I like to tweet about Svelte and post helpful code snippets. Follow me there if you would like some too!
https://linguinecode.com/post/learning-the-basics-of-a-svelte-component
CC-MAIN-2022-21
refinedweb
641
76.62
Provided by: manpages_3.54-1ubuntu1_all: $ strings /proc/1/environ epoll_create(2), eventfd(2), inotify_init(2), signalfd(2), and timerfd(2)), the entry will be a symbolic link with contents of the form anon_inode:<file-type> In some cases,. /proc/[pid]/limits (since kernel. /proc/[pid]/maps A file containing the currently mapped memory regions and their access permissions. See mmap(2) for some further information about memory mappings.>] (since Linux 3.4) A thread's stack (where the <tid> is a thread ID). It corresponds to the /proc/[pid]/task/[tid]/ path. [vdso] The virtual dynamically linked shared object. [heap] The process's heap. If the pathname field is blank, this is an anonymous mapping as obtained via the mmap(2) function. There is no easy way to coordinate this back to a process's source, short of running it through gdb(1), strace(1), or similar. filesystem (see stat(2)). in the form "type[.subtype]". (10) mount source: filesystem Linux kernel source tree. /proc/[pid]/mounts (since Linux 2.4.19) This is a list of all the filesystems currently mounted in the process's mount namespace.] terminate. namespace terminate.). . /proc/[pid]/root UNIX and Linux support the idea of a per-process root of the filesystem, present only , measured in clock ticks (divide by sysconf(_SC_CLK_TCK)). priority %ld (18) limit provide information on real-time signals; use /proc/[pid]/status instead. blocked %lu (32) The bitmap of blocked signals, displayed as a decimal number. Obsolete, because it does not provide information on real-time signals; use /proc/[pid]/status instead. sigignore %lu (33) The bitmap of ignored signals, displayed as a decimal number. Obsolete, because it does not provide information on real-time signals; use /proc/[pid]/status instead. sigcatch %lu (34) The bitmap of caught signals, displayed as a decimal number. Obsolete, because it does not provide information on real-time signals; use /proc/[pid]/status instead. wchan %lu (35) (36) Number of pages swapped (not maintained). cnswap %lu (37) Cumulative nswap for child processes (not maintained).). * VmPeak: Peak virtual memory size. * VmSize: Virtual memory size. * VmLck: Locked memory size (see mlock(3)). * fileds mmaped, such as libraries. Shmem %lu (since Linux 2.6.32) [To be documented.] Slab %lu In-kernel data structures cache.)/vm/overcommit-accounting. /proc/sys/vm/overcommit_memory), allocations which would exceed the CommitLimit (detailed above). VmallocChunk %lu Largest contiguous block of vmalloc area which is free. HardwareCorrupted %lu (since Linux 2.6.32) (CONFIG_MEMORY_FAILURE is required.) [To be documented.] AnonHugePages %lu (since Linux 2.6.38) (CONFIG_TRANSPARENT_HUGEPAGE is required.) Non-file backed huge pages mapped into user-space page tables. Linux. Since Linux 2.6.16 this file is present only. irq (since Linux 2.6.0-test4) (6) Time servicing interrupts. softirq (since Linux 2.6.0-test4) The kernel constant NR_OPEN imposes an upper limit on the value that may be placed in) behaviour credentials behaviour filesystem for the hotplug policy agent. The default value in this file is /sbin/hotplug. /proc/sys/kernel/htab-reclaim (PowerPC only) If this file is set to a nonzero value, the PowerPC htab (see kernel file Documentation/powerpc/ppc_htab.txt) is pruned each time the system hits the idle loop. This file defines a system-wide limit specifying the maximum number of bytes in a single message written on a System V message queue. /proc/sys/kernel/msgmni (since Linux 2.4) This file defines the system-wide limit on the number of message queue identifiers. /real-root-dev This file is documented in the Linux/sched_rr_timeslice_ms (since Linux 3.9) See sched_rr_get_interval(2). . Only set this file to 1 if you have a good understanding of the semantics of the applications using System V shared memory on your system. present only if the CONFIG_MAGIC_SYSRQ kernel configuration option is enabled. For further details see the Linux(8) kill only a process that tries to access it.. Only present if the kernel was configured with CONFIG_MEMORY_FAILURE. /proc/sys/vm/memory_failure_recovery (since Linux 2.6.32) Enable memory failure recovery (when supported by the platform) 1: Attempt recovery. 0: Always panic on a memory failure. Only presentfields), sysctl(8) The Linux kernel source files: Documentation/filesystems/proc.txt and Documentation/sysctl/vm.txt. COLOPHON This page is part of release 3.54 of the Linux man-pages project. A description of the project, and information about reporting bugs, can be found at.
http://manpages.ubuntu.com/manpages/trusty/man5/proc.5.html
CC-MAIN-2019-30
refinedweb
737
53.58
It's probably something extremely obvious, but I can't seem to find why the first to columns in the grid are the same. grid = [[1]*8 for n in range(8)] cellWidth = 70 def is_odd(x): return bool(x - ((x>>1)<<1)) def setup(): size(561, 561) def draw(): x,y = 0,0 for xrow, row in enumerate(grid): for xcol, col in enumerate(row): rect(x, y, cellWidth, cellWidth) if is_odd(xrow+xcol): fill(0,0,0) else: fill(255) x = x + cellWidth y = y + cellWidth x = 0 def mousePressed(): print mouseY/cellWidth, mouseX/cellWidth print is_odd(mouseY/cellWidth + mouseX/cellWidth) Looks like the fill command doesn't change the color of the last rectangle you drew; instead it changes the color of all the draw calls subsequent to it. According to the docs: Sets the color used to fill shapes. For example, if you run fill(204, 102, 0), all subsequent shapes will be filled with orange. So all of your colors are lagging one square behind. It's as if all of the tiles were shifted one to the right, except for the leftmost row which is shifted one down and eight to the left. This makes that row mismatch with all the others. Try putting your fill calls before the rect call: for xcol, col in enumerate(row): if is_odd(xrow+xcol): fill(0,0,0) else: fill(255) rect(x, y, cellWidth, cellWidth) x = x + cellWidth
https://codedump.io/share/wm5NJSqWw7zE/1/why-is-this-grid-not-even
CC-MAIN-2021-21
refinedweb
242
62.21
Tk_HandleEvent - invoke event handlers for window system events #include <tk.h> Tk_HandleEvent(eventPtr) Pointer to X event to dispatch to relevant handler(s). Tk_HandleEvent is a lower-level procedure that deals with window events. It is called by Tk_ServiceEvent (and indirectly by Tk_DoOneEvent), and in a few other cases within Tk. It makes callbacks to any window event handlers (created by calls to Tk_CreateEventHandler) that match eventPtr and then returns. In some cases it may be useful for an application to bypass the Tk event queue and call Tk_HandleEvent directly instead of calling Tk_QueueEvent followed by Tk_ServiceEvent. This. callback, event, handler, window
http://search.cpan.org/~ni-s/Tk-804.027/pod/pTk/HandleEvent.pod
CC-MAIN-2017-47
refinedweb
102
57.27
> With devfsd, I have a very nice way of implementing persistence. Ican >oh come on....a devmgr without kernel interaction would be a complicated, gross hack.How would you handle dynamic devices with your devmgr? poll every secondfor new devices? and what do you poll? Your devmgr has to get theinformation about new/expired devices from somewhere - so the kernel hasto export this information somewhere. Why not in the most logical way:directly to /dev? Richards solution is a lot cleaner:- A driver registers itself with devfs (in place of registering itsmajor/minor numbers). - Devfs tells devfsd about it. - devfsd decides the policy and communicates it to in-kernel devfs- devfs creates the device accordinglyAnd it's a similar procedure for device un-registration.A couple of points to take from this:- policy is set in userspace! it could also be feasible for this toinclude namespace aswell, if you don't like the default. - in-kernel devfs stays light-weight and clean.- all the dirty stuff - persistence, *whatever other feature you want* -is done in userspace by devfsd.So what's the problem? regards,Paul.-To unsubscribe from this list: send the line "unsubscribe linux-kernel" inthe body of a message to majordomo@vger.rutgers.eduPlease read the FAQ at
http://lkml.org/lkml/1999/6/18/36
CC-MAIN-2015-06
refinedweb
210
52.26
Problem You need to quickly create an application to manage Contacts. You want both win form and web form applications. Solution Use XAF to create the win form and web form solutions and sub class the built in Person class to create the Contact class. Discussion By default XAF creates both a win form and web form application out of the box. Create a new XAF application and give it a suitable name. Since a Contact is a “kind of” Person, subclass the built in Person class to create your required Contact class, adding any extra properties you require. using System; using DevExpress.Xpo; using DevExpress.ExpressApp; using DevExpress.Persistent.Base; using DevExpress.Persistent.BaseImpl; using DevExpress.Persistent.Validation; namespace XAFCookbook_001.Module { [DefaultClassOptions] public class Contact : Person { public Contact(Session session) : base(session) { } private string _SpouseName; public string SpouseName { get { return _SpouseName; } set { SetPropertyValue("SpouseName", ref _SpouseName, value); } } } } In this example I have added the extra property SpouseName to the Contact class (using the CR template “XPS”), you should add any extra properties required in the same way. By default, the win form project is the startup project. Press F5 to launch the win form application. Note your Contact class has inherited the properties, and the view layout, from the Person class. Note also the addition of your SpouseName property. Switch the startup project to be the web form project and launch the application once more. Note you have the same layout and functionality as you did in the win form application.
http://community.devexpress.com/blogs/garyshort/archive/2008/08/29/xaf-cookbook-1-quickly-create-a-contact-management-application.aspx
crawl-002
refinedweb
252
50.23
On Sat, Mar 02, 2002 at 10:05:58PM +0000, Patrick Kirk wrote: | > other than removing and reinstalling some packages or trying a | > different libc version. Binary-only software distribution really | > sucks! | | What ? Does this mean I need to reboot into Windows to run Java apps? It's probably an issue with the browser, the plugin, the JVM and/or the libc combined together as they are. | Please say not - my wife's laptop has died. The icq is for her. So far | over the past week, she has moved from her Windows laptop to my XFCE | desktop with no comments or complaints. But if I can't get her ICQ | working, then we're regressing instead of advancing. Have you tried Everybuddy or Gaim for ICQ? There's a bunch of ICQ clients out there. | Surely there is some other Linux jvm that doesn't die upon contact with | Java? There are other JVMs (such as 'kaffe' or 'gcj') but they aren't as complete or up-to-date as the Sun/Blackdown one. I've attached the source and bytecode for a "hello world" program. Save the .class file somewhere and try "java Hello" in the directory you saved it. In addition, if you want, you can try compiling from the source yourself. (the command is "javac Hello.java", same command to run it) If this works, then you know the basics of the jvm work. If this fails, you've got bigger problems. (FWIW I use the 'j2sdk1.3' package daily at work for java development) | Patrick "Write one run anywhere else but not on my browser" Kirk :-). It's more like "write once, test everywhere you can, go back and rewrite it to work on the other platforms, then test again, then shoot yourself because it is virtually impossible to add the next feature in a cross-platform manner". (that's how I've paid the bills this past year) -D -- The Lord detests all the proud of heart. Be sure of this: They will not go unpunished. Proverbs 16:7 public class Hello { public static void main( String[] argv ) { System.out.println( "Hello World" ) ; } } Attachment: Hello.class Description: application/java-vm
https://lists.debian.org/debian-user/2002/03/msg00430.html
CC-MAIN-2016-40
refinedweb
366
75.2
Note that I build with -with-wide-int in my ./configure, which is likely relevant here. The recent changes to generate limits.h on the master branch run because my platform (MacOS Sierra) does not define things like ULLONG_WIDTH). Most of the C source compiles, but unexmacosx.c fails with: In file included from unexmacosx.c:100: ./lisp.h:93:26: error: use of undeclared identifier 'LLONG_WIDTH' enum { EMACS_INT_WIDTH = LLONG_WIDTH }; ^ ./lisp.h:119:29: error: use of undeclared identifier 'SIZE_WIDTH' enum { BITS_PER_BITS_WORD = SIZE_WIDTH }; ^ And a bunch more related errors, all because those limits.h constants are not defined. Analysis reveals that while the generated ../lib/limits.h is indeed read, it does NOT define LLONG_WIDTH, etc. The reason for this is that #if (! defined ULLONG_WIDTH \ && (defined _GNU_SOURCE || defined __STDC_WANT_IEC_60559_BFP_EXT__)) is false, because neither _GNU_SOURCE nor __STDC_WANT_IEC_60559_BFP_EXT__ are defined. The reason other code works is because it #include <config.h> which defines it before including <limits.h>, but unexmacosx.c includes <stdlib.h> before including <config.h> for reasons it describes, and this causes <limits.h> to get included as well. My fix was: diff --git a/src/unexmacosx.c b/src/unexmacosx.c index bdacc8b..4dd35fb 100644 --- a/src/unexmacosx.c +++ b/src/unexmacosx.c @@ -90,6 +90,9 @@ along with GNU Emacs. If not, see <>. */ with the #define:s in place, the prototypes will be wrong and we get warnings. To prevent that, include stdlib.h before config.h. */ +#ifndef _GNU_SOURCE +#define _GNU_SOURCE 1 +#endif #include <stdlib.h> #include <config.h> #undef malloc Regards, /Bob
https://lists.gnu.org/archive/html/emacs-devel/2016-09/msg00427.html
CC-MAIN-2018-26
refinedweb
256
56.01
Lately I was surprised to see how outdated nearly all Struts2 books and tutorials have become. So, here we go! A tutorial based on Struts 2.3.x and Eclipse. Downloading Struts 2 Library Go to Struts download page. Download the Struts library distribution as shown above. Unzip it somewhere. You will see all the JAR files in the lib folder. Create Eclipse Project Make sure that you have Eclipse IDE for Java EE Developers installed and Tomcat is configured there. Create a dynamic web project called StrutsHelloWorld in Eclipse (File > New > Dynamic Web Project). Add the following JAR files to the WEB-INF/lib folder. Configure web.xml Eclipse does not generate web.xml by default. Right click the project and select Java EE Tools > Generate Deployment Descriptor Stub. Open WEB-INF/web.xml. Register Struts2 filter by adding these lines. <filter> <filter-name>struts2</filter-name> <filter-class>org.apache.struts2.dispatcher.ng.filter.StrutsPrepareAndExecuteFilter</filter-class> </filter> <filter-mapping> <filter-name>struts2</filter-name> <url-pattern>/*</url-pattern> </filter-mapping> Save changes. Develop the Action Class We will develop a simple action class that will take user’s name as input and print out a greeting message. Create a class called com.example.HelloWorld. Add the following code: public class HelloWorld { private String message; private String userName; public String execute() { setMessage("Hello " + getUserName()); return "ok"; } public String getMessage() { return message; } public void setMessage(String message) { this.message = message; } public String getUserName() { return userName; } public void setUserName(String userName) { this.userName = userName; } } Register the Action Class Struts 2 configuration file is struts.xml and is located in the class path in the default package. Create a file called struts.xml directly under the src folder. Add these lines to the file: <?xml version="1.0" encoding="UTF-8" ?> <!DOCTYPE struts PUBLIC "-//Apache Software Foundation//DTD Struts Configuration 2.3//EN" ""> <struts> <package name="default" extends="struts-default"> <action name="HelloWorld" class="com.example.HelloWorld"> <result name="ok">/success.jsp</result> </action> </package> </struts> Note, you must add the DOCTYPE. Otherwise, Struts 2 will fail to parse it at runtime. Create the JSP Files First, we will create the index.jsp page that will have a form where a user can enter her name. In the WebContent folder, create a file called index.jsp. Add the following code: <%@taglib <s:textfield <s:submit/> </s:form> </body> </html> Basically, this binds the input text field to the userName property of the action class. Next, we will create success.jsp. The file is loaded when “ok” result is returned by the execute() method. Create success.jsp and add this code: <%@taglib</h1> </body> </html> This shows the message property of the action class. Run the Application Deploy the project in tomcat. Then run this URL: Enter a name in the text box and submit the form. Make sure that you see the message. 3 thoughts on “Basic Struts2 Tutorial” Oh my goodness! Incredible article dude! Many thanks, However I am going through difficulties with your RSS. I don’t know the reason why I am unable to subscribe to it. Is there anybody else getting similar RSS problems? Anyone who knows the solution can you kindly respond? Thanks!! Thank you. What problem with RSS are you having. The link is: Thanks for finally writing about >Basic Struts2 Tutorial | mobiarch <Liked it!
https://mobiarch.wordpress.com/2014/08/11/basic-struts2-tutorial/
CC-MAIN-2018-13
refinedweb
560
61.33
14 September 2011 10:31 [Source: ICIS news] SINGAPORE (ICIS)--?xml:namespace> The rupee has depreciated by about 9% against the US dollar since the middle of July, local players said. As a result, imported goods are no longer as competitively priced compared to domestic material, the players added. The buying interest for September cargoes in Stricter checks are being made on LLDPE containers at India’s main port of Nhava Sheva, following the discovery of some local importers who have been misdeclaring metallocene LLDPE (MLLDPE) cargoes as butane-based LLDPE, to take advantage of lower tax charges, traders said. “Every single container of LLDPE is being checked. This is lengthening the process of customs clearance,” a Mumbai-based trader said. “Usually it takes one to five days to clear a container of plastics resin. However, now it is taking at least 10 days,” a Mumbai-based converter said. Local players said the market outlook is poor as a result, especially because of weaker demand in the local downstream demand in the run-up to the Hindu festival of Diwali at the end of October. The offers of Middle East LLDPE film on 14 September were at $1,360-1,370/tonne (€993-1,000/tonne) CFR (cost & freight) Mumbai, while the offers of Iranian offers of PP raffia were at $1,550/tonne CFR Mumbai. “Demand is not improving [as we had hoped]. Although interest for imports is not strong, we do not [foresee] a significant increase in domestic sales,” a local polyolefins producer said. ($1 = €0.73) For more on polyethylene and polyprop
http://www.icis.com/Articles/2011/09/14/9492166/india-pe-pp-import-interest-weak-on-volatile-currency-slow.html
CC-MAIN-2014-49
refinedweb
265
60.65